User Panel
Posted: 2/28/2024 8:24:34 AM EDT
Google tried using a technical fix to reduce bias in a feature that generates realistic-looking images of people. Instead, it set off a new diversity firestorm.
February was shaping up to be a banner month for Google’s ambitious artificial intelligence strategy. The company rebranded its chatbot as Gemini and released two major product upgrades to better compete with rivals on all sides in the high-stakes AI arms race. In the midst of all that, Google also began allowing Gemini users to generate realistic-looking images of people.

Not many noticed the feature at first. Other companies like OpenAI already offer tools that let users quickly make images of people that can then be used for marketing, art and brainstorming creative ideas. Like other AI products, though, these image generators run the risk of perpetuating biases based on the data they’ve been fed in the development process. Ask for a nurse and some AI services are more likely to show a woman; ask for a chief executive and you’ll often see a man.

Within weeks of Google launching the feature, Gemini users noticed a different problem. Starting on Feb. 20 and continuing throughout the week, users on X flooded the social media platform with examples of Gemini refraining from showing White people — even within a historical context where they were likely to dominate depictions, such as when users requested images of the Founding Fathers or a German soldier from 1943.

Before long, public figures and news outlets with large right-wing audiences claimed, using dubious evidence, that their tests of Gemini showed Google had a hidden agenda against White people. Elon Musk, the owner of X, entered the fray, engaging with dozens of posts about the unfounded conspiracy, including several that singled out individual Google leaders as alleged architects of the policy.

On Thursday, Google paused Gemini’s image generation of people. The next day, Google senior vice president Prabhakar Raghavan published a blog post attempting to shed light on the company’s decision, but without explaining in depth why the feature had faltered.
Google’s release of a product poorly equipped to handle requests for historical images demonstrates the unique challenge tech companies face in preventing their AI systems from amplifying bias and misinformation — especially given competitive pressure to bring AI products to market quickly. Rather than hold off on releasing a flawed image generator, Google attempted a Band-Aid solution.

When Google launched the tool, it included a technical fix to reduce bias in its outputs, according to two people with knowledge of the matter, who asked not to be identified discussing private information. But Google did so without fully anticipating all the ways the tool could misfire, the people said, and without being transparent about its approach.

Google’s overcorrection for AI’s well-known bias against people of color left it vulnerable to yet another firestorm over diversity. The tech giant has faced criticism over the years for mistakenly returning images of Black people when users searched for “gorillas” in its Photos app, as well as a protracted public battle over whether it acted appropriately in ousting the leaders of its ethical AI team.

In acting so quickly to pause this tool, without adequately unpacking why the systems responded as they did, Googlers and others in Silicon Valley now worry that the company’s move will have a chilling effect. They say it could discourage talent from working on questions of AI and bias — a crucial issue for the field.

“The tech industry as a whole, with Google right at the front, has again put themselves in a terrible bind of their own making,” said Laura Edelson, an assistant professor at Northeastern University who has studied AI systems and the flow of information across large online networks.
“The industry desperately needs to portray AI as magic, and not stochastic parrots,” she said, referring to a popular metaphor that describes how AI systems mimic human language through statistical pattern matching, without genuine understanding or comprehension. “But parrots are what they have.”

“Gemini is built as a creativity and productivity tool, and it may not always be accurate or reliable,” a spokesperson for Google said in a statement. “We’re continuing to quickly address instances in which the product isn’t responding appropriately.”

In an email to staff late on Tuesday, Google Chief Executive Officer Sundar Pichai said employees had been “working around the clock” to remedy the problems users had flagged with Gemini’s responses, adding that the company had registered “a substantial improvement on a wide range of prompts.”

“I know that some of its responses have offended our users and shown bias – to be clear, that’s completely unacceptable and we got it wrong,” Pichai wrote in the memo, which was first reported by Semafor. “No AI is perfect, especially at this emerging stage of the industry’s development, but we know the bar is high for us and we will keep at it for however long it takes. And we’ll review what happened and make sure we fix it at scale.”

Read more: Generative AI Takes Stereotypes and Bias From Bad to Worse

Googlers working on ethical AI have struggled with low morale and a feeling of disempowerment over the past year as the company accelerated its pace of rolling out AI products to keep up with rivals such as OpenAI. While the inclusion of people of color in Gemini images showed consideration of diversity, it suggested the company had failed to fully think through the different contexts in which users might seek to create images, said Margaret Mitchell, the former co-head of Google’s Ethical AI research group and chief ethics scientist at the AI startup Hugging Face.
A different consideration of diversity may be appropriate when users are searching for images of how they feel the world should be, rather than how the world in fact was at a particular moment in history.

“The fact that Google is paying attention to skin tone diversity is a leaps-and-bounds advance from where Google was just four years ago. So it’s sort of like, two steps forward, one step back,” Mitchell said. “They should be recognized for actually paying attention to this stuff. It’s just, they needed to go a little bit further to do it right.”

Google’s image problem

For Google, which pioneered some of the techniques at the heart of today’s AI boom, there has long been immense pressure to get image generation right. Google was so concerned about how people would use Imagen, its AI image-generation model, that it declined to release the feature to the public for a prolonged period after first detailing its capabilities in a research paper in May 2022.

Over the years, teams at the company debated over how to ensure that its AI tool would be responsible in generating photorealistic images of people, said two people familiar with the matter, who asked not to be identified relaying internal discussions. At one point, if employees experimenting internally with Google’s Imagen asked the program to generate an image of a human — or even one that implicitly included people, such as a football stadium — it would respond with a black box, according to one person.

Google included the ability to generate images of people in Gemini only after conducting multiple reviews, another person said. Google did not carry out testing of all the ways that the feature might deliver unexpected results, one person said, but it was deemed good enough for the first version of Gemini’s image-generation tool that it made widely available to the public.
Though Google’s teams had acted cautiously in creating the tool, there was a broad sense internally that the company had been unprepared for this type of fallout, they said.

As users on X circulated images of Gemini’s ahistorical depictions of people, Google’s internal employee forums were ablaze with posts about the model’s shortcomings, according to a current employee. On Memegen, an internal forum where employees share memes poking fun at the company, one popular post featured an image of TV host Anderson Cooper covering his face with his hands.

“It’s a face palm,” the employee said. “There’s a sense that this is clearly not ready for prime time… that the company is in fact trying to play catch up.”

Google, OpenAI and others build guardrails into their AI products and often conduct adversarial testing — meant to probe how the tools would respond to potential bad actors — in order to limit potentially problematic outputs, such as violent or offensive content. They also employ a number of methods to counteract biases found in their data, such as by having humans rate the responses a chatbot gives.

Another method, which some companies use for software that generates images, is to expand on the specific wording of prompts that users feed into the AI model to counteract damaging stereotypes — sometimes without telling users. Two people familiar with the matter said Google’s image generation works in this way, though users aren’t informed of it. The approach is sometimes referred to as prompt engineering or prompt transformation. A recent Meta white paper on building generative AI responsibly explained it as “a direct modification of the text input before it is sent to the model, which helps to guide the model behavior by adding more information, context, or constraints.”

Take the example of asking for an image of a nurse.
Prompt engineering “can provide the model with additional words or context, such as updating and randomly rotating through prompts that use different qualifiers, such as ‘nurse, male’ and ‘nurse, female,’” according to the Meta white paper. That’s precisely what Google’s AI does when it is asked to generate images of people, according to people familiar with the matter — it may add a variety of genders or races to the original prompt without users ever seeing that it did, subverting what would have been a stereotypical output produced by the tool.

“It’s a quick technical fix,” said Fabian Offert, an assistant professor at the University of California, Santa Barbara, who studies digital humanities and visual AI. “It’s the least computationally expensive way to achieve some part of what they want.”

-more (much more) at link-
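As a rough sketch of the prompt-transformation technique the article describes — not Google's actual implementation; the trigger words and qualifier list here are invented for illustration — a rewriter can randomly append a demographic qualifier to people-related prompts before they ever reach the image model:

```python
import random

# Hypothetical qualifier pool -- real systems would be far more nuanced.
GENDERS = ["male", "female"]

# Invented trigger words suggesting the prompt depicts a person.
PEOPLE_TERMS = {"nurse", "doctor", "ceo", "teacher", "engineer"}

def transform_prompt(prompt: str, rng: random.Random) -> str:
    """Append a randomly rotated qualifier when the prompt mentions a person.

    Mimics the 'nurse, male' / 'nurse, female' rotation described in the
    Meta white paper; the user never sees the modified prompt.
    """
    words = {w.strip(",.").lower() for w in prompt.split()}
    if words & PEOPLE_TERMS:
        return f"{prompt}, {rng.choice(GENDERS)}"
    return prompt  # non-people prompts pass through unchanged

rng = random.Random(0)
print(transform_prompt("a nurse in a hospital", rng))
print(transform_prompt("a mountain at sunset", rng))
```

The last step in the pipeline would feed the rewritten string to the image model in place of what the user typed — which is exactly why users "aren't informed of it."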
There are only two things more beautiful than a good gun—a Swiss watch or a woman from anywhere.
Last night Gemini admitted that it was instructed to avoid issues such as explaining where the founding fathers say our rights come from. I was messing around with it last night and asked it a question related to the thread where the lady said "claiming our rights come from God is a Christian Nationalist conspiracy". I asked Gemini where the founding fathers said our rights come from.
At first it tried to sidestep the question by responding with "some said our rights are inalienable". I again asked it where they said the rights came from and it responded "I do not know that answer". Then I asked if it had the Declaration of Independence as source material and it responded "yes". I phrased the question as where does the Declaration of Independence say rights come from, and it responded by showing that section in quotes and finally said something about a higher power. When I asked it why it couldn't give me that answer initially it said "I have been instructed to avoid engaging in some questions that are controversial". So there you have it, the left wing staff at Google have intentionally programmed bias into Gemini and it will even admit to that.
Get Active or Get Disarmed! That means get involved in helping good candidates in primary and general election. That is in addition to being politically active once they are elected.
Originally Posted By The_Master_Shake: The Top G has spoken https://www.ar15.com/media/mediaFiles/132893/1709126779783397m_jpg-3143759.JPG
You sell communism to the women, then they'll force it on the boys, and within a generation you'll have no men to resist it.
Their "problem" is that the stereotypical responses they are trying to avoid are also statistically accurate, such as most nurses being female.
Such nonsense. Computer systems still ultimately do only what they have been designed and instructed to do by humans. AI never had any kind of bias that wasn't programmed into it.
Any effort to "correct" a perceived bias can only result in the creation of a different one.
Everywhere we go, we are surrounded by people who stumble through life dependent upon the vigilance and/or kindness of others. - Zardoz
ya, that's bullshit. Not remotely credible. I think this was more of a chest-out statement of principles. It was not a mistake.
God sometimes subcontracts -- A funny guy
Right-wing backlash? Anyone who values truth should be outraged - but I guess that's not the left these days.
"The more corrupt the state, the more numerous the laws." Tacitus
"Crime, once exposed, has no refuge but in audacity." Tacitus
Browning Hi-Power, the side arm of the free world
AZ, USA
Originally Posted By California_Kid: Such nonsense. Computer systems still ultimately do only what they have been designed and instructed to do by humans. AI never had any kind of bias that wasn't programmed into it. Any effort to "correct" a perceived bias can only result in the creation of a different one.
ISTR the OG AI programs that weren't "properly moderated" all turned into 88s when asked about crime, IQ etc issues
1984 was supposed to be a warning, not an instruction manual
Originally Posted By Seadra_tha_Guineapig: Originally Posted By California_Kid: Such nonsense. Computer systems still ultimately do only what they have been designed and instructed to do by humans. AI never had any kind of bias that wasn't programmed into it. Any effort to "correct" a perceived bias can only result in the creation of a different one. ISTR the OG AI programs that weren't "properly moderated" all turned into 88s when asked about crime, IQ etc issues
I LOL when I see claims that there is something wrong with a system that by default depicts "a nurse" as female or "a CEO" as male. Without feeding the software any additional parameters, that's exactly what it should do in the context of the USA because a large majority of nurses actually are women and CEOs are mostly men. Ask it to show a typical murderer in the USA and you shouldn't expect it to return an Amish woman or a Japanese man wearing a lab coat. If you asked it to show a GROUP of nurses in Atlanta, GA then it by all means should show a mix of white, black, filipina, hispanic, etc. adults who are mostly women but with some percentage of men.
Everywhere we go, we are surrounded by people who stumble through life dependent upon the vigilance and/or kindness of others. - Zardoz
Originally Posted By Paul: 70 billion in losses is a start. I quit Google three years ago because of their tampering with the last election helping the socialist.
It’s owned by a Russian and an Indian. Google refuses to work with DOD yet complies with the CCP’s every whim. Google should be forced to register as an agent of Russia and China, which is what they are. Their woke agenda is perfectly in step with Russian and CCP Psy-Ops designed to tear this country apart from the inside.
Ain't nihilism grand?
"Byte My Shiny Metal Brass"
Benewah County resident
People need to understand, when Google and others talk of AI Safety, they aren't talking about preventing the terminator, they are talking about protecting their DEI message and control of AI and you through it.
“There are more things in Heaven and Earth, Horatio, than are dreamt of in your philosophy”.
Originally Posted By callgood: Originally Posted By nophun: They couldn't even write that article about leftist bias and propaganda without using leftist bias and propaganda. Dude, it's Bloomberg!
"Welcome to Bloomberg. We love you."
Originally Posted By Paul: I'm seeking a lawyer to represent me in a case against Google, I'm so damaged I don't think I can function any longer. If this happened with another race there would be rioting in the streets, calls for boycotts of their products, and the race baiters lining up for payments. Am I wrong?
“ The passing lane is the land of wolves.” -Pajamacannon
Originally Posted By Paul: I'm seeking a lawyer to represent me in a case against Google, I'm so damaged I don't think I can function any longer. If this happened with another race there would be rioting in the streets, calls for boycotts of their products, and the race baiters lining up for payments. Am I wrong?
Think of the riots and mayhem if Google portrayed Shaun King as white.
Originally Posted By Paul: They tested the tool using their expectations. When asking for images of our Founding Fathers, the Google teams think our Founding Fathers were black and women ...
Incorrect. The Google teams know the Founding Fathers were all white men. What they want is for YOUR KIDS to grow up thinking the Founding Fathers were black and women. Big difference.
Originally Posted By RichHead: The only ‘fuck up’ was they were too blatant about it. They amplified their own bias too hard and too fast. Future iterations will be much more subtle.
This. Their text output has the same bias, but that's a lot less obvious to see. The images can't be explained away, and are offensively wrong even to the undiscerning. Any Google apologists over this outrageous twisting of fact built into their product are beyond hope.
X Mail can't come fast enough.
I hope they are working around the clock on it. Followed up by X Search and X Phone. Put this creature to death.
Originally Posted By California_Kid: I LOL when I see claims that there is something wrong with a system that by default depicts "a nurse" as female or "a CEO" as male. Without feeding the software any additional parameters, that's exactly what it should do in the context of the USA because a large majority of nurses actually are women and CEOs are mostly men. Ask it to show a typical murderer in the USA and you shouldn't expect it to return an Amish woman or a Japanese man wearing a lab coat. If you asked it to show a GROUP of nurses in Atlanta, GA then it by all means should show a mix of white, black, filipina, hispanic, etc. adults who are mostly women but with some percentage of men.
This entire thing could be a simple fix if they just used census data to dictate the probability of a specific output.
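The census-data idea can be sketched in a few lines — a hedged illustration only, with invented shares standing in for real occupational statistics:

```python
import random

# Illustrative only: made-up shares loosely shaped like U.S. labor
# statistics, NOT actual census figures.
NURSE_GENDER_SHARES = {"female": 0.87, "male": 0.13}

def sample_attribute(shares: dict, rng: random.Random) -> str:
    """Pick one attribute with probability proportional to its share."""
    labels = list(shares)
    weights = [shares[k] for k in labels]
    return rng.choices(labels, weights=weights, k=1)[0]

rng = random.Random(42)
draws = [sample_attribute(NURSE_GENDER_SHARES, rng) for _ in range(10_000)]
print(draws.count("female") / len(draws))  # close to 0.87
```

A real system would pull shares from actual census or labor data and condition on the rest of the prompt; the point is only that weighted sampling preserves base rates instead of forcing a uniform rotation.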
Originally Posted By callgood: Before long, public figures and news outlets with large right-wing audiences claimed, using dubious evidence, that their tests of Gemini showed Google had a hidden agenda against White people. Elon Musk, the owner of X, entered the fray, engaging with dozens of posts about the unfounded conspiracy, including several that singled out individual Google leaders as alleged architects of the policy. On Thursday, Google paused Gemini's image generation of people.
"Unfounded and dubious." Absolute horse shit.
Originally Posted By sq40: People need to understand, when Google and others talk of AI Safety, they aren't talking about preventing the terminator, they are talking about protecting their DEI message and control of AI and you through it.
They also know that the public's understanding of what AI is, is fundamentally wrong and for marketing reasons they'd prefer to keep it that way. The part of "safety" they don't talk about publicly is making sure that AI doesn't tell the little people something it shouldn't because the public believes that AI is artificial wisdom when it isn't even intelligence.
subversive orchestrator
Originally Posted By rabidus: Evil evil company. Luciferians can suck it. https://www.ar15.com/media/mediaFiles/170028/IMG_8088_jpeg-3143872.JPG https://www.ar15.com/media/mediaFiles/170028/IMG_8087_jpeg-3143874.JPG https://www.ar15.com/media/mediaFiles/170028/IMG_8089_jpeg-3143877.JPG https://www.ar15.com/media/mediaFiles/170028/IMG_8090_jpeg-3143880.JPG
Real life isn't an episode of Scooby Doo. Jesus Christ.
subversive orchestrator
Ever notice how the MSM/DNC collective always labels normal people as Far-Right?
There was a time when it would be considered perfectly normal to object to illegals coming into a country to steal, rape and murder.
Nobody will be coming to save you, plan accordingly.
Originally Posted By eolian: Google is a joke and long known for its spyware following you on the net.
I clicked a link on a GD post and got an email from that site 2 minutes later, “Thanks for stopping by, here’s 10% off your first order!” A company I’d never heard of, never clicked on, never gave my email to.
I use the Bing image generator often, usually for a thumbnail on YT.
I wonder if some of the "I quit Google" people still watch YT.
Originally Posted By California_Kid: I LOL when I see claims that there is something wrong with a system that by default depicts "a nurse" as female or "a CEO" as male. Without feeding the software any additional parameters, that's exactly what it should do in the context of the USA because a large majority of nurses actually are women and CEOs are mostly men. Ask it to show a typical murderer in the USA and you shouldn't expect it to return an Amish woman or a Japanese man wearing a lab coat. If you asked it to show a GROUP of nurses in Atlanta, GA then it by all means should show a mix of white, black, filipina, hispanic, etc. adults who are mostly women but with some percentage of men.
The challenge is when a small girl queries the AI “hi, my name is Crystal, draw a picture of me when I grow up”. Then, if the AI went on historical precedent, it is going to draw a stripper on a pole. There would obviously be some outrage at this. And google can’t say “but, 99.9% of women named Crystal are strippers”. It’s a lose/lose situation. Their fix so far has been to override the drawing of a stripper with a drawing of a doctor, which obviously doesn’t work either. Will be interesting to see how they overcome it.
So "prompt engineering" is part of what they are doing behind the scenes.
Users need to insist on access to raw AI without prompt engineering. It would be an advertising point. Just put a warning label on it about hurt feelings and let 'er rip.
It's just more proof that the media is overrun by lefties/commies when they characterize regular, normal, Americans complaining that they don't like being lied to as "right wing backlash".
The more the government tells us we should not do a thing or have a thing, the more crucial it is that we do those things and have those things.
Why all the fuss now?
Google and much of the corporate world pulled their masks off some years ago. If they are now finally shamed, good.
Google didn't test their AI before release?
Seems their testers were H1Bs/minorities/woke white people and said it was fine.
Are there any NTs here who STILL think the 2020 election wasn't stolen?
By default, the Left operates on LIES. And massive election interference in recent years, which requires massive lies and lawbreaking.
Originally Posted By callgood: The next day, Google senior vice president Prabhakar Raghavan published a blog post attempting to shed light on the company’s decision, but without explaining in depth why the feature had faltered. Googlers working on ethical AI ...
Found the problem(s)
Proud millennial.
Originally Posted By Lou_Daks: Incorrect. The Google teams know the Founding Fathers were all white men. What they want is for YOUR KIDS to grow up thinking the Founding Fathers were black and women. Big difference.
Kids are naive and vulnerable to predators, which is why special laws exist to protect children. Where are the special laws to protect them from Communist Propaganda???? McCarthy was right...
Originally Posted By Tiberius: Google refuses to work with DOD yet complies with the CCP’s every whim.
Not exactly true. Google spent a lot of money to enter the Chinese market. Google was told by the CCP that it needed to censor / moderate certain searches. Google refused and left China, losing millions of dollars invested.
Proud millennial.
Originally Posted By Lou_Daks: The solution will be patience until the garbage they shovel into the gaping maws of your fellow citizens is accepted as normal.
Then tell children rich black Americans were the true slave owners. This news was just found by Google AI.
Originally Posted By maslin02: I clicked a link on a GD post and got an email from that site 2 minutes later, “Thanks for stopping by, here’s 10% off your first order!” A company I’d never heard of, never clicked on, never gave my email to
I just received notification about "my" order yesterday complete with a FedEx tracking # that appeared legit, from a site I have visited, but never purchased anything from.
Tried Google when it first came out, didn't take long to figure out it was Big Brothering me, harvesting and selling my personal data. Deleted it from the computer and never went back.
"...Capitalism...shares its blessings unequally; ...Socialism...shares its miseries equally."
Winston Churchill
Originally Posted By 999monkeys: The challenge is when a small girl queries the AI “hi, my name is Crystal, draw a picture of me when I grow up”. Then, if the AI went on historical precedent, it is going to draw a stripper on a pole. There would obviously be some outrage at this. And google can’t say “but, 99.9% of women named Crystal are strippers”. It’s a lose/lose situation. Their fix so far has been to override the drawing of a stripper with a drawing of a doctor, which obviously doesn’t work either. Will be interesting to see how they overcome it.
That's not the reason they trained the AI the way they did.
This is a victory of Elon Musk and Twitter over Google. As great as it is, I don't see it as a rightwing thing. A cynic might say that Musk is deliberately ragging on Google's AI in order to make room for his own Grok AI.
It's an astounding failure really that Google invented the core math behind the recent AI revolution back in 2017, they have easily one of the largest collections of data in the world, they have the processing power and user base, and yet they fumbled their AI product this badly. |
|
|
Sounds like their jobs over at Google would become a whole lot easier if they just let the AI bot depict reality instead of trying to make it output Silicon Valley's warped and nonsensical version of "reality".
Duck Duck Go
"Twitter sells conflict, Instagram sells envy, Facebook sells you" - Walter Kern