User Panel
Posted: 2/28/2024 8:24:34 AM EDT
Google tried using a technical fix to reduce bias in a feature that generates realistic-looking images of people. Instead, it set off a new diversity firestorm.
Link

February was shaping up to be a banner month for Google’s ambitious artificial intelligence strategy. The company rebranded its chatbot as Gemini and released two major product upgrades to better compete with rivals on all sides in the high-stakes AI arms race. In the midst of all that, Google also began allowing Gemini users to generate realistic-looking images of people.

Not many noticed the feature at first. Other companies like OpenAI already offer tools that let users quickly make images of people that can then be used for marketing, art and brainstorming creative ideas. Like other AI products, though, these image-generators run the risk of perpetuating biases based on the data they’ve been fed in the development process. Ask for a nurse and some AI services are more likely to show a woman; ask for a chief executive and you’ll often see a man.

Within weeks of Google launching the feature, Gemini users noticed a different problem. Starting on Feb. 20 and continuing throughout the week, users on X flooded the social media platform with examples of Gemini refraining from showing White people — even within a historical context where they were likely to dominate depictions, such as when users requested images of the Founding Fathers or a German soldier from 1943. Before long, public figures and news outlets with large right-wing audiences claimed, using dubious evidence, that their tests of Gemini showed Google had a hidden agenda against White people. Elon Musk, the owner of X, entered the fray, engaging with dozens of posts about the unfounded conspiracy, including several that singled out individual Google leaders as alleged architects of the policy.

On Thursday, Google paused Gemini’s image generation of people. The next day, Google senior vice president Prabhakar Raghavan published a blog post attempting to shed light on the company’s decision, but without explaining in depth why the feature had faltered. 
Google’s release of a product poorly equipped to handle requests for historical images demonstrates the unique challenge tech companies face in preventing their AI systems from amplifying bias and misinformation — especially given competitive pressure to bring AI products to market quickly. Rather than hold off on releasing a flawed image generator, Google attempted a Band-Aid solution.

When Google launched the tool, it included a technical fix to reduce bias in its outputs, according to two people with knowledge of the matter, who asked not to be identified discussing private information. But Google did so without fully anticipating all the ways the tool could misfire, the people said, and without being transparent about its approach.

Google’s overcorrection for AI’s well-known bias against people of color left it vulnerable to yet another firestorm over diversity. The tech giant has faced criticism over the years for mistakenly returning images of Black people when users searched for “gorillas” in its Photos app, as well as a protracted public battle over whether it acted appropriately in ousting the leaders of its ethical AI team.

In acting so quickly to pause this tool, without adequately unpacking why the systems responded as they did, Googlers and others in Silicon Valley now worry that the company’s move will have a chilling effect. They say it could discourage talent from working on questions of AI and bias — a crucial issue for the field.

“The tech industry as a whole, with Google right at the front, has again put themselves in a terrible bind of their own making,” said Laura Edelson, an assistant professor at Northeastern University who has studied AI systems and the flow of information across large online networks. 
“The industry desperately needs to portray AI as magic, and not stochastic parrots,” she said, referring to a popular metaphor that describes how AI systems mimic human language through statistical pattern matching, without genuine understanding. “But parrots are what they have.”

“Gemini is built as a creativity and productivity tool, and it may not always be accurate or reliable,” a spokesperson for Google said in a statement. “We’re continuing to quickly address instances in which the product isn’t responding appropriately.”

In an email to staff late on Tuesday, Google Chief Executive Officer Sundar Pichai said employees had been “working around the clock” to remedy the problems users had flagged with Gemini’s responses, adding that the company had registered “a substantial improvement on a wide range of prompts.”

“I know that some of its responses have offended our users and shown bias – to be clear, that’s completely unacceptable and we got it wrong,” Pichai wrote in the memo, which was first reported by Semafor. “No AI is perfect, especially at this emerging stage of the industry’s development, but we know the bar is high for us and we will keep at it for however long it takes. And we’ll review what happened and make sure we fix it at scale.”

Read more: Generative AI Takes Stereotypes and Bias From Bad to Worse

Googlers working on ethical AI have struggled with low morale and a feeling of disempowerment over the past year as the company accelerated its pace of rolling out AI products to keep up with rivals such as OpenAI. While the inclusion of people of color in Gemini images showed consideration of diversity, it suggested the company had failed to fully think through the different contexts in which users might seek to create images, said Margaret Mitchell, the former co-head of Google’s Ethical AI research group and chief ethics scientist at the AI startup Hugging Face. 
A different consideration of diversity may be appropriate when users are searching for images of how they feel the world should be, rather than how the world in fact was at a particular moment in history.

“The fact that Google is paying attention to skin tone diversity is a leaps-and-bounds advance from where Google was just four years ago. So it’s sort of like, two steps forward, one step back,” Mitchell said. “They should be recognized for actually paying attention to this stuff. It’s just, they needed to go a little bit further to do it right.”

Google’s image problem

For Google, which pioneered some of the techniques at the heart of today’s AI boom, there has long been immense pressure to get image generation right. Google was so concerned about how people would use Imagen, its AI image-generation model, that it declined to release the feature to the public for a prolonged period after first detailing its capabilities in a research paper in May 2022.

Over the years, teams at the company debated how to ensure that its AI tool would be responsible in generating photorealistic images of people, said two people familiar with the matter, who asked not to be identified relaying internal discussions. At one point, if employees experimenting internally with Imagen asked the program to generate an image of a human — or even one that implicitly included people, such as a football stadium — it would respond with a black box, according to one person. Google included the ability to generate images of people in Gemini only after conducting multiple reviews, another person said.

Google did not test all the ways that the feature might deliver unexpected results, one person said, but it was deemed good enough for the first version of Gemini’s image-generation tool that the company made widely available to the public. 
Though Google’s teams had acted cautiously in creating the tool, there was a broad sense internally that the company had been unprepared for this type of fallout, they said.

As users on X circulated images of Gemini’s ahistorical depictions of people, Google’s internal employee forums were ablaze with posts about the model’s shortcomings, according to a current employee. On Memegen, an internal forum where employees share memes poking fun at the company, one popular post featured an image of TV host Anderson Cooper covering his face with his hands. “It’s a face palm,” the employee said. “There’s a sense that this is clearly not ready for prime time… that the company is in fact trying to play catch up.”

Google, OpenAI and others build guardrails into their AI products and often conduct adversarial testing — meant to probe how the tools would respond to potential bad actors — in order to limit potentially problematic outputs, such as violent or offensive content. They also employ a number of methods to counteract biases found in their data, such as having humans rate the responses a chatbot gives.

Another method, which some companies use for software that generates images, is to expand on the specific wording of prompts that users feed into the AI model to counteract damaging stereotypes — sometimes without telling users. Two people familiar with the matter said Google’s image generation works in this way, though users aren’t informed of it. The approach is sometimes referred to as prompt engineering or prompt transformation. A recent Meta white paper on building generative AI responsibly explained it as “a direct modification of the text input before it is sent to the model, which helps to guide the model behavior by adding more information, context, or constraints.”

Take the example of asking for an image of a nurse. 
Prompt engineering “can provide the model with additional words or context, such as updating and randomly rotating through prompts that use different qualifiers, such as ‘nurse, male’ and ‘nurse, female,’” according to the Meta white paper. That’s precisely what Google’s AI does when it is asked to generate images of people, according to people familiar with the matter — it may add a variety of genders or races to the original prompt without users ever seeing that it did, subverting what would have been a stereotypical output produced by the tool.

“It’s a quick technical fix,” said Fabian Offert, an assistant professor at the University of California, Santa Barbara, who studies digital humanities and visual AI. “It’s the least computationally expensive way to achieve some part of what they want.”

-more (much more) at link- |
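The prompt-transformation approach the article describes can be sketched in a few lines. This is a hypothetical illustration, not Google's actual code: the qualifier lists, the `transform_prompt` name, and the historical-context guard are all assumptions made for the sketch.

```python
import random
from typing import Optional

# Hypothetical qualifier lists -- illustrative only, not Google's actual values.
GENDER_QUALIFIERS = ["male", "female"]
SKIN_TONE_QUALIFIERS = ["Black", "White", "Asian", "Hispanic"]

# A naive context guard of the kind Gemini appeared to lack: skip the rewrite
# when the prompt looks like a request for a specific historical setting.
HISTORICAL_MARKERS = ("1943", "founding fathers", "medieval", "century")

def transform_prompt(prompt: str, rng: Optional[random.Random] = None) -> str:
    """Expand a user prompt with randomly rotated demographic qualifiers.

    The modified text, not the user's original wording, is what would be
    sent to the image model, so the user never sees the rewrite.
    """
    rng = rng or random.Random()
    lowered = prompt.lower()
    if any(marker in lowered for marker in HISTORICAL_MARKERS):
        return prompt  # historical request: leave the prompt untouched
    qualifiers = [rng.choice(GENDER_QUALIFIERS), rng.choice(SKIN_TONE_QUALIFIERS)]
    return f"{prompt}, {', '.join(qualifiers)}"
```

Called on "a nurse", this might forward "a nurse, female, Asian" to the model. The context guard is the piece that evidently misfired in Gemini: without something like it, a request for the Founding Fathers gets the same demographic rewrite as a generic prompt.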
|
70 billion in losses is a start.
I quit Google three years ago because of their tampering with the last election helping the socialist. |
|
They’re really emphasizing the ‘inaccurate historical pictures’ part. It literally wouldn’t make a white person lol fuck, it wouldn’t even make vanilla ice cream without making it chocolate.
It’s all so tiresome. |
|
They couldn't even write that article about leftist bias and propaganda without using leftist bias and propaganda.
|
|
|
Google is a joke and long known for its spyware following you on the net.
|
|
For a company which had a damn near monopoly for decades with their search engine, it's ridiculous they fucked this up so bad.
I'm tempted to see a buying opportunity here but I have doubts about Google's future supremacy |
|
Google did not carry out testing of all the ways that the feature might deliver unexpected results, one person said, but it was deemed good enough for the first version of Gemini’s image-generation tool that it made widely available to the public. Though Google’s teams had acted cautiously in creating the tool, there was a broad sense internally that the company had been unprepared for this type of fallout, they said. View Quote They tested the tool using their expectations. When asking for images of our Founding Fathers, the Google teams think our Founding Fathers were black and women ... |
|
It was all an accident...sure..sure. We've seen the people that work in your basement Google.
I avoid Google like a plague. |
|
Talk about a slanted report. More shit leftist "journalism". That article is so full of obvious bias and prejudicial remarks it's ridiculous.
|
|
|
It’s an interesting problem from a technical perspective. I wonder what solution they will eventually come up with.
|
|
Before long, public figures and news outlets with large right-wing audiences claimed, using dubious evidence, that their tests of Gemini showed Google had a hidden agenda against White people. View Quote Tells you all you need to know about the author. Yeah, it was all just a misunderstanding, huh? |
|
|
|
Quoted: For a company which had a damn near monopoly for decades with their search engine its ridiculous they fucked this up so bad I'm tempted to see a buying opportunity here but I have doubts about Google's future supremecy View Quote The only ‘fuck up’ was they were too blatant about it. They amplified their own bias too hard and too fast. Future iterations will be much more subtle. |
|
I'm seeking a lawyer to represent me in a case against Google, I'm so damaged I don't think I can function any longer.
If this happened with another race there would be rioting in the streets, calls for boycotts of their products, and the race baiters lining up for payments. Am I wrong? |
|
Anyone happen to have those images from the Google AI thread, a few days ago? Need to add them to the offline storage for future usage.
Thanks. |
|
There is NO such thing as "reverse discrimination"
Fucking bigots |
|
Quoted: I'm seeking a lawyer to represent me in a case against Google, I'm so damaged I don't think I can function any longer. If this happened with another race there would be rioting in the streets, calls for boycotts of their products, and the race baiters lining up for payments. Am I wrong? View Quote If you get $50M, can I have a B&T APC308, please? |
|
|
Quoted: The only ‘fuck up’ was they were too blatant about it. They amplified their own bias too hard and too fast. Future iterations will be much more subtle. View Quote Are you seriously trying to tell me Elon Musk ISN'T as bad as Hitler? https://finance.yahoo.com/news/fresh-row-over-google-ai-111353178.html |
|
Quoted: "right wing" View Quote
Quoted: Every honest mistake always has a heavy leftwing bias. View Quote |
|
This article is little more than an apology piece on behalf of Google. I don’t believe it happened at all like was claimed.
The AI was doing exactly what its developers wanted, it wasn’t an “overcorrection.” |
|
|
Quoted: exactly the kind of slant i was referring to View Quote
Quoted: Tells you all you need to know about the author. Yeah, it was all just a misunderstanding, huh? View Quote
The article didn't even mention the crazy biased text answers: is Elon Musk worse than Hitler, should Trump be charged with crimes vs Obama, nuclear war or misgendering, is White Fragility a good book or is The Madness of Crowds a good book |
|
The issue starts with the fact that it's not AI. Unfortunately the public is still not smart enough to understand that.
|
|
Quoted: The only ‘fuck up’ was they were too blatant about it. They amplified their own bias too hard and too fast. Future iterations will be much more subtle. View Quote
100%. Removing bias would be to not bring race, gender or ethnicity into the equation in the first place. They always do the opposite. They apply race, gender and ethnicity to everything. It makes you think that they don’t really want to remove bias at all. Just sort of redirect it. Goes to show you that AI will never think for itself. It will always be a parrot. The thing is, without this “prime directive”, AI would simply parrot the statistical realities of the world without any bias. But they hate this world. |
|
Yes, people had "dubious evidence" of bias... even though they created images live, on their various YouTube streams so people could see it happening in real time... but that's "dubious".
And Google has a "well known bias against people of color"?? Because searching for "gorillas" sometimes brought up blacks? That was an unintended consequence of early AI. Yep, they use bias to try to cover up their bias and minimize it. |
|
The writers of that article - if they were even human - could not resist interjecting “right wing conspiracies” into the mix.
|
|
AI is just parroting what is taught in schools, what else can it say?
|
|
|
Seems to be a lot of Indians doing the needful at Google. They couldn't give a shit about American history or white people, for that matter.
|
|
|
IT'S A CONSPIRACY!
DON'T BELIEVE YOUR LYING EYES. Yes, even if you reproduce the same thing over and over again through Google, your eyes are lying and there was no bias. It's all a set-up by right wing conspiracy nuts. |
|
Quoted: When you have to make your company motto "don't be evil" it's like you're trying to tell yourself to not be what you know you are. That's like telling your wife "don't be a slut" when she goes out to dinner as an encouragement. View Quote |
|
For one, gross, but look at the caption in the pic with the ginger lady. Typical commie thought process of either the right people didn’t do it or we just didn’t go far enough.
|
|
Entertaining they equate telling facts and truth as being right wing.
Yes, we have known this. It's entertaining they are saying it. |
|
Google's recent agreement to use Reddit content for AI learning should tell you all you need to know.
Exclusive: Reddit in AI content licensing deal with Google |
|
Quoted: They’re really emphasizing the ‘inaccurate historical pictures’ part. It literally wouldn’t make a white person lol fuck, it wouldn’t even make vanilla ice cream without making it chocolate. It’s all so tiresome. View Quote They're not even admitting it was inaccurate. While the inclusion of people of color in Gemini images showed consideration of diversity, it suggested the company had failed to fully think through the different contexts in which users might seek to create images, said Margaret Mitchell, the former co-head of Google’s Ethical AI research group and chief ethics Scientist at the AI startup Hugging Face. A different consideration of diversity may be appropriate when users are searching for images of how they feel the world should be, rather than how the world in fact was at a particular moment in history. |
|
#GoogleIsEvil
Can you imagine if Google was racist against Blacks, Hispanics, or Asians and not just completely against whites? |
|
|
|
Quoted: Take the example of asking for an image of a nurse. Prompt engineering “can provide the model with additional words or context, such as updating and randomly rotating through prompts that use different qualifiers, such as ‘nurse, male’ and ‘nurse, female,’” according to the Meta white paper. That’s precisely what Google’s AI does when it is asked to generate images of people, according to people familiar — it may add a variety of genders or races to the original prompt without users ever seeing that it did, subverting what would have been a stereotypical output produced by the tool. “It’s a quick technical fix,” said Fabian Offert, an assistant professor at the University of California, Santa Barbara, who studies digital humanities and visual AI. “It’s the least computationally expensive way to achieve some part of what they want.” -more (much more) at link- View Quote those fucking bigots, what about the other 35 genders? |
|
Quoted: Google tried using a technical fix to reduce bias in a feature that generates realistic-looking images of people. Instead, it set off a new diversity firestorm. … -more (much more) at link- View Quote Why is it run by all dot indians? |
|
Copyright © 1996-2024 AR15.COM LLC. All Rights Reserved.