Posted: 2/28/2024 8:24:34 AM EDT
Google tried using a technical fix to reduce bias in a feature that generates realistic-looking images of people. Instead, it set off a new diversity firestorm.
Link

Attachment Attached File


February was shaping up to be a banner month for Google’s ambitious artificial intelligence strategy. The company rebranded its chatbot as Gemini and released two major product upgrades to better compete with rivals on all sides in the high-stakes AI arms race. In the midst of all that, Google also began allowing Gemini users to generate realistic-looking images of people.

Not many noticed the feature at first. Other companies like OpenAI already offer tools that let users quickly make images of people that can then be used for marketing, art and brainstorming creative ideas. Like other AI products, though, these image-generators run the risk of perpetuating biases based on the data they’ve been fed in the development process. Ask for a nurse and some AI services are more likely to show a woman; ask for a chief executive and you’ll often see a man.

Within weeks of Google launching the feature, Gemini users noticed a different problem. Starting on Feb. 20 and continuing throughout the week, users on X flooded the social media platform with examples of Gemini refraining from showing White people — even within a historical context where they were likely to dominate depictions, such as when users requested images of the Founding Fathers or a German soldier from 1943. Before long, public figures and news outlets with large right-wing audiences claimed, using dubious evidence, that their tests of Gemini showed Google had a hidden agenda against White people.

Elon Musk, the owner of X, entered the fray, engaging with dozens of posts about the unfounded conspiracy, including several that singled out individual Google leaders as alleged architects of the policy. On Thursday, Google paused Gemini’s image generation of people.

The next day, Google senior vice president Prabhakar Raghavan published a blog post attempting to shed light on the company’s decision, but without explaining in depth why the feature had faltered.

Google’s release of a product poorly equipped to handle requests for historical images demonstrates the unique challenge tech companies face in preventing their AI systems from amplifying bias and misinformation — especially given competitive pressure to bring AI products to market quickly. Rather than hold off on releasing a flawed image generator, Google attempted a Band-Aid solution.

When Google launched the tool, it included a technical fix to reduce bias in its outputs, according to two people with knowledge of the matter, who asked not to be identified discussing private information. But Google did so without fully anticipating all the ways the tool could misfire, the people said, and without being transparent about its approach.

Attachment Attached File


Google’s overcorrection for AI’s well-known bias against people of color left it vulnerable to yet another firestorm over diversity. The tech giant has faced criticisms over the years for mistakenly returning images of Black people when users searched for “gorillas” in its Photos app as well as a protracted public battle over whether it acted appropriately in ousting the leaders of its ethical AI team.

Googlers and others in Silicon Valley now worry that the company's move to pause the tool so quickly, without adequately unpacking why the systems responded as they did, will have a chilling effect. They say it could discourage talent from working on questions of AI and bias, a crucial issue for the field.

“The tech industry as a whole, with Google right at the front, has again put themselves in a terrible bind of their own making,” said Laura Edelson, an assistant professor at Northeastern University who has studied AI systems and the flow of information across large online networks. “The industry desperately needs to portray AI as magic, and not stochastic parrots,” she said, referring to a popular metaphor that describes how AI systems mimic human language through statistical pattern matching, without genuine understanding or comprehension. “But parrots are what they have.”

“Gemini is built as a creativity and productivity tool, and it may not always be accurate or reliable,” a spokesperson for Google said in a statement. “We’re continuing to quickly address instances in which the product isn’t responding appropriately.”

In an email to staff late on Tuesday, Google Chief Executive Officer Sundar Pichai said employees had been “working around the clock” to remedy the problems users had flagged with Gemini’s responses, adding that the company had registered “a substantial improvement on a wide range of prompts.”

“I know that some of its responses have offended our users and shown bias – to be clear, that’s completely unacceptable and we got it wrong,” Pichai wrote in the memo, which was first reported by Semafor. “No AI is perfect, especially at this emerging stage of the industry’s development, but we know the bar is high for us and we will keep at it for however long it takes. And we’ll review what happened and make sure we fix it at scale.”

Read more: Generative AI Takes Stereotypes and Bias From Bad to Worse

Googlers working on ethical AI have struggled with low morale and a feeling of disempowerment over the past year as the company accelerated its pace of rolling out AI products to keep up with rivals such as OpenAI. While the inclusion of people of color in Gemini images showed consideration of diversity, it suggested the company had failed to fully think through the different contexts in which users might seek to create images, said Margaret Mitchell, the former co-head of Google’s Ethical AI research group and chief ethics scientist at the AI startup Hugging Face. A different consideration of diversity may be appropriate when users are searching for images of how they feel the world should be, rather than how the world in fact was at a particular moment in history.

“The fact that Google is paying attention to skin tone diversity is a leaps-and-bounds advance from where Google was just four years ago. So it’s sort of like, two steps forward, one step back,” Mitchell said. “They should be recognized for actually paying attention to this stuff. It’s just, they needed to go a little bit further to do it right.”

Google’s image problem
For Google, which pioneered some of the techniques at the heart of today’s AI boom, there has long been immense pressure to get image generation right. Google was so concerned about how people would use Imagen, its AI image-generation model, that it declined to release the feature to the public for a prolonged period after first detailing its capabilities in a research paper in May 2022.

Over the years, teams at the company debated over how to ensure that its AI tool would be responsible in generating photorealistic images of people, said two people familiar with the matter, who asked not to be identified relaying internal discussions. At one point, if employees experimenting internally with Google’s Imagen asked the program to generate an image of a human — or even one that implicitly included people, such as a football stadium — it would respond with a black box, according to one person. Google included the ability to generate images of people in Gemini only after conducting multiple reviews, another person said.

Google did not carry out testing of all the ways that the feature might deliver unexpected results, one person said, but it was deemed good enough for the first version of Gemini’s image-generation tool that it made widely available to the public. Though Google’s teams had acted cautiously in creating the tool, there was a broad sense internally that the company had been unprepared for this type of fallout, they said.

Attachment Attached File


As users on X circulated images of Gemini’s ahistorical depictions of people, Google’s internal employee forums were ablaze with posts about the model’s shortcomings, according to a current employee. On Memegen, an internal forum where employees share memes poking fun at the company, one popular post featured an image of TV host Anderson Cooper covering his face with his hands.

“It’s a face palm,” the employee said. “There’s a sense that this is clearly not ready for prime time… that the company is in fact trying to play catch up.”

Google, OpenAI and others build guardrails into their AI products and often conduct adversarial testing — meant to probe how the tools would respond to potential bad actors — in order to limit potentially problematic outputs, such as violent or offensive content. They also employ a number of methods to counteract biases found in their data, such as by having humans rate the responses a chatbot gives. Another method, which some companies use for software that generates images, is to expand on the specific wording of prompts that users feed into the AI model to counteract damaging stereotypes — sometimes without telling users.

Two people familiar with the matter said Google’s image generation works in this way, though users aren’t informed of it. The approach is sometimes referred to as prompt engineering or prompt transformation. A recent Meta white paper on building generative AI responsibly explained it as “a direct modification of the text input before it is sent to the model, which helps to guide the model behavior by adding more information, context, or constraints.”

Take the example of asking for an image of a nurse. Prompt engineering “can provide the model with additional words or context, such as updating and randomly rotating through prompts that use different qualifiers, such as ‘nurse, male’ and ‘nurse, female,’” according to the Meta white paper. That’s precisely what Google’s AI does when it is asked to generate images of people, according to people familiar — it may add a variety of genders or races to the original prompt without users ever seeing that it did, subverting what would have been a stereotypical output produced by the tool.
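
To make the mechanics concrete, here is a minimal sketch of that prompt-rotation approach in Python. The function names, the qualifier list, and the stubbed model call are all hypothetical illustrations of the technique the Meta paper describes, not Google's actual implementation.

import random

# Hypothetical sketch of prompt transformation: a qualifier is rotated into the
# user's prompt before it reaches the image model, so a request like "nurse"
# is not always rendered with the stereotyped default. Illustrative only.

QUALIFIERS = ["male", "female"]  # a real system might rotate through many attributes

def transform_prompt(user_prompt: str) -> str:
    """Expand the prompt with a randomly chosen qualifier the user never sees."""
    return f"{user_prompt}, {random.choice(QUALIFIERS)}"

def generate_image(user_prompt: str) -> str:
    expanded = transform_prompt(user_prompt)
    # image = image_model.generate(expanded)  # placeholder for the real model call
    return expanded

print(generate_image("nurse"))  # e.g. "nurse, male" or "nurse, female"

The rewrite happens entirely before the model call, which is why it is computationally cheap and why users never see the modified prompt.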

“It’s a quick technical fix,” said Fabian Offert, an assistant professor at the University of California, Santa Barbara, who studies digital humanities and visual AI. “It’s the least computationally expensive way to achieve some part of what they want.”

-more (much more) at link-


Link Posted: 2/28/2024 8:26:50 AM EDT
[#1]
70 billion in losses is a start.

I quit Google three years ago because of their tampering with the last election helping the socialist.
Link Posted: 2/28/2024 8:29:15 AM EDT
[#2]
Google.

If you are Normal and White, we hate you.

.
Link Posted: 2/28/2024 8:29:20 AM EDT
[#3]
They’re really emphasizing the ‘inaccurate historical pictures’ part. It literally wouldn’t make a white person lol fuck, it wouldn’t even make vanilla ice cream without making it chocolate.

It’s all so tiresome.
Link Posted: 2/28/2024 8:32:01 AM EDT
[#4]
Google/Alphabet are evil
Link Posted: 2/28/2024 8:32:10 AM EDT
[#5]
They couldn't even write that article about leftist bias and propaganda without using leftist bias and propaganda.
Link Posted: 2/28/2024 8:33:21 AM EDT
[#6]
Originally Posted By Tejas1836:
Google/Alphabet are evil
View Quote


Democrats are evil.
Link Posted: 2/28/2024 8:34:42 AM EDT
[#7]
Google is a joke and long known for its spyware following you on the net.
Link Posted: 2/28/2024 8:35:36 AM EDT
[Last Edit: The_Master_Shake] [#8]
For a company which had a damn near monopoly for decades with their search engine, it's ridiculous they fucked this up so bad

I'm tempted to see a buying opportunity here but I have doubts about Google's future supremacy
Link Posted: 2/28/2024 8:35:37 AM EDT
[#9]
Google did not carry out testing of all the ways that the feature might deliver unexpected results, one person said, but it was deemed good enough for the first version of Gemini’s image-generation tool that it made widely available to the public. Though Google’s teams had acted cautiously in creating the tool, there was a broad sense internally that the company had been unprepared for this type of fallout, they said.
View Quote


They tested the tool using their expectations. When asking for images of our Founding Fathers, the Google teams think our Founding Fathers were black and women ...
Link Posted: 2/28/2024 8:35:39 AM EDT
[#10]
It was all an accident...sure..sure. We've seen the people that work in your basement Google.
I avoid Google like a plague.
Link Posted: 2/28/2024 8:36:09 AM EDT
[Last Edit: Jodan1776] [#11]
Talk about a slanted report.   More shit leftist "journalism".     That article is so full of obvious bias and prejudicial remarks it's ridiculous.
Link Posted: 2/28/2024 8:36:33 AM EDT
[#12]
Originally Posted By Tejas1836:
Google/Alphabet are evil
View Quote


When you have to make your company motto "don't be evil" it's like you're trying to tell yourself to not be what you know you are.

That's like telling your wife "don't be a slut" when she goes out to dinner as an encouragement.
Link Posted: 2/28/2024 8:36:54 AM EDT
[#13]
It’s an interesting problem from a technical perspective.  I wonder what solution they will eventually come up with.
Link Posted: 2/28/2024 8:37:17 AM EDT
[#14]
Before long, public figures and news outlets with large right-wing audiences claimed, using dubious evidence, that their tests of Gemini showed Google had a hidden agenda against White people.
View Quote


Tells you all you need to know about the author.  Yeah, it was all just a misunderstanding, huh?
Link Posted: 2/28/2024 8:38:03 AM EDT
[#15]
Originally Posted By fenderCAB:
Tells you all you need to know about the author.  Yeah, it was all just a misunderstanding, huh?
View Quote
exactly the kind of slant i was referring to
Link Posted: 2/28/2024 8:39:21 AM EDT
[#16]
Originally Posted By nophun:
They couldn't even write that article about leftist bias and propaganda without using leftist bias and propaganda.
View Quote

Dude, it's Bloomberg!
Link Posted: 2/28/2024 8:39:45 AM EDT
[#17]
Originally Posted By The_Master_Shake:
For a company which had a damn near monopoly for decades with their search engine, it's ridiculous they fucked this up so bad

I'm tempted to see a buying opportunity here but I have doubts about Google's future supremacy
View Quote


The only ‘fuck up’ was they were too blatant about it. They amplified their own bias too hard and too fast. Future iterations will be much more subtle.
Link Posted: 2/28/2024 8:39:51 AM EDT
[#18]
I'm seeking a lawyer to represent me in a case against Google, I'm so damaged I don't think I can function any longer.

If this happened with another race there would be rioting in the streets, calls for boycotts of their products, and the race baiters lining up for payments.

Am I wrong?
Link Posted: 2/28/2024 8:40:00 AM EDT
[#19]
"right wing"
Link Posted: 2/28/2024 8:41:23 AM EDT
[#20]
anyone happen to have those images from the Google AI thread, a few days ago? need to add them to the offline storage for future usage.
thanks.
Link Posted: 2/28/2024 8:42:05 AM EDT
[#21]
There is NO such thing as "reverse discrimination"

Fucking bigots
Link Posted: 2/28/2024 8:43:41 AM EDT
[#22]
Originally Posted By Paul:
I'm seeking a lawyer to represent me in a case against Google, I'm so damaged I don't think I can function any longer.

If this happened with another race there would be rioting in the streets, calls for boycotts of their products, and the race baiters lining up for payments.

Am I wrong?
View Quote


If you get $50M, can I have a B&T APC308, please? 😇
Link Posted: 2/28/2024 8:45:01 AM EDT
[#23]
Originally Posted By John_Wayne777:
"right wing"
View Quote


Unfounded.  Dubious evidence.
Link Posted: 2/28/2024 8:50:12 AM EDT
[Last Edit: The_Master_Shake] [#24]
Originally Posted By RichHead:


The only ‘fuck up’ was they were too blatant about it. They amplified their own bias too hard and too fast. Future iterations will be much more subtle.
View Quote


Are you seriously trying to tell me Elon Musk ISN'T as bad as Hitler?

https://finance.yahoo.com/news/fresh-row-over-google-ai-111353178.html

Attachment Attached File
Link Posted: 2/28/2024 8:53:20 AM EDT
[#25]
Every honest mistake always has a heavy leftwing bias.
Link Posted: 2/28/2024 8:54:34 AM EDT
[Last Edit: runcible] [#26]
Originally Posted By Augustine:
Originally Posted By John_Wayne777:
"right wing"
View Quote
Unfounded.  Dubious evidence.
View Quote
I know, right? It's blatantly Orwellian-level propagandizing.

Originally Posted By SoonerBorn:
Every honest mistake always has a heavy leftwing bias.
View Quote
Funny how that works, huh?
Link Posted: 2/28/2024 8:54:57 AM EDT
[#27]
This article is little more than an apology piece on behalf of Google. I don’t believe it happened at all like was claimed.

The AI was doing exactly what its developers wanted, it wasn’t an “overcorrection.”
Link Posted: 2/28/2024 8:58:31 AM EDT
[#28]
AI appears to not be immune to the 80/20 rule.
Link Posted: 2/28/2024 8:58:36 AM EDT
[#29]
Originally Posted By nophun:
They couldn't even write that article about leftist bias and propaganda without using leftist bias and propaganda.
View Quote

THIS!
Link Posted: 2/28/2024 8:59:38 AM EDT
[#30]
Originally Posted By Jodan1776:
Originally Posted By fenderCAB:
Tells you all you need to know about the author.  Yeah, it was all just a misunderstanding, huh?
View Quote
exactly the kind of slant i was referring to
View Quote


The article didn't even mention the crazy biased text answers - is Elon Musk worse than Hitler, should Trump be charged with crimes vs Obama, nuclear war or misgendering, is White Fragility a good book or is Madness of Crowds a good book
Link Posted: 2/28/2024 9:04:00 AM EDT
[#31]
The issue starts with the fact that it's not AI.  Unfortunately the public is still not smart enough to understand that.
Link Posted: 2/28/2024 9:06:33 AM EDT
[#32]
Originally Posted By RichHead:
Originally Posted By The_Master_Shake:
For a company which had a damn near monopoly for decades with their search engine, it's ridiculous they fucked this up so bad

I'm tempted to see a buying opportunity here but I have doubts about Google's future supremacy
View Quote

The only ‘fuck up’ was they were too blatant about it. They amplified their own bias too hard and too fast. Future iterations will be much more subtle.
View Quote


100%. Removing bias would be to not bring race, gender or ethnicity into the equation in the first place. They always do the opposite. They apply race, gender and ethnicity to everything.

It makes you think that they don’t really want to remove bias at all. Just sort of redirect it.

Goes to show you that AI will never think for itself. It will always be a parrot. The thing is, without this “prime directive”, AI would simply parrot the statistical realities of the world without any bias. But they hate this world.
Link Posted: 2/28/2024 9:07:12 AM EDT
[#33]
Yes, people had "dubious evidence" of bias... even though they created images live, on their various YouTube streams so people could see it happening in real time... but that's "dubious".

And Google has a "well known bias against people of color"??  Because searching for "gorillas" sometimes brought up blacks? That was an unintended consequence of early AI.

Yep, they use bias to try to cover up their bias and minimize it.
Link Posted: 2/28/2024 9:15:22 AM EDT
[#34]
The writers of that article - if they were even human - could not resist interjecting “right wing conspiracies” into the mix.  

Link Posted: 2/28/2024 9:24:05 AM EDT
[#35]
AI is just parroting what is taught in schools, what else can it say?
Link Posted: 2/28/2024 9:26:00 AM EDT
[#36]
Gemini, show me an image of a white phone

Link Posted: 2/28/2024 9:28:01 AM EDT
[#37]
Originally Posted By Colt1860:
The issue starts with the fact that it's not AI.  Unfortunately the public is still not smart enough to understand that.
View Quote


Yep.

Most all AI is programmed to act a specific way…..which isn’t AI.
Link Posted: 2/28/2024 9:30:54 AM EDT
[#38]
Seems to be a lot of Indians doing the needful at Google. They could give a shit about American history or white people either for that matter.
Link Posted: 2/28/2024 9:32:13 AM EDT
[#39]
Originally Posted By Rheinmetall792:


If you get $50M, can I have a B&T APC308, please? 😇
View Quote

Done!
Link Posted: 2/28/2024 9:34:35 AM EDT
[#40]
IT'S A CONSPIRACY!

DON'T BELIEVE YOUR LYING EYES.


Yes, even if you reproduce the same thing over and over again through Google, your eyes are lying and there was no bias. It's all a set-up by right wing conspiracy nuts.
Link Posted: 2/28/2024 9:37:27 AM EDT
[#41]
Originally Posted By RRA_223:


When you have to make your company motto "don't be evil" it's like you're trying to tell yourself to not be what you know you are.

That's like telling your wife "don't be a slut" when she goes out to dinner as an encouragement.
View Quote
When you expand into a new line of business and the first thing you do is commit corporate espionage and stab your best friend in the back, anything else is not a surprise
Link Posted: 2/28/2024 9:38:52 AM EDT
[#42]
For one, gross, but look at the caption in the pic with the ginger lady.  Typical commie thought process of either the right people didn’t do it or we just didn’t go far enough.

Link Posted: 2/28/2024 9:45:31 AM EDT
[#43]
Entertaining that they equate telling facts and truth with being right wing.

Yes, we have known this. It's entertaining that they are saying it.
Link Posted: 2/28/2024 9:50:40 AM EDT
[#44]
Google's recent agreement to use Reddit content for AI learning should tell you all you need to know.

Exclusive: Reddit in AI content licensing deal with Google

Link Posted: 2/28/2024 9:54:52 AM EDT
[#45]
Originally Posted By doty_soty:
They’re really emphasizing the ‘inaccurate historical pictures’ part. It literally wouldn’t make a white person lol fuck, it wouldn’t even make vanilla ice cream without making it chocolate.

It’s all so tiresome.
View Quote


They're not even admitting it was inaccurate.

While the inclusion of people of color in Gemini images showed consideration of diversity, it suggested the company had failed to fully think through the different contexts in which users might seek to create images, said Margaret Mitchell, the former co-head of Google’s Ethical AI research group and chief ethics scientist at the AI startup Hugging Face. A different consideration of diversity may be appropriate when users are searching for images of how they feel the world should be, rather than how the world in fact was at a particular moment in history.
Link Posted: 2/28/2024 9:55:09 AM EDT
[#46]
#GoogleIsEvil

Can you imagine if Google was racist against Blacks, Hispanics, or Asians and not just completely against whites?  
Link Posted: 2/28/2024 9:57:11 AM EDT
[#47]
Link Posted: 2/28/2024 10:06:57 AM EDT
[#48]
The Top G has spoken

Attachment Attached File
Link Posted: 2/28/2024 10:08:18 AM EDT
[Last Edit: Seadra_tha_Guineapig] [#49]
Originally Posted By callgood:
Take the example of asking for an image of a nurse. Prompt engineering “can provide the model with additional words or context, such as updating and randomly rotating through prompts that use different qualifiers, such as ‘nurse, male’ and ‘nurse, female,’” according to the Meta white paper. That’s precisely what Google’s AI does when it is asked to generate images of people, according to people familiar — it may add a variety of genders or races to the original prompt without users ever seeing that it did, subverting what would have been a stereotypical output produced by the tool.

“It’s a quick technical fix,” said Fabian Offert, an assistant professor at the University of California, Santa Barbara, who studies digital humanities and visual AI. “It’s the least computationally expensive way to achieve some part of what they want.”

-more (much more) at link-


View Quote

those fucking bigots, what about the other 35 genders?
Link Posted: 2/28/2024 10:13:13 AM EDT
[#50]
Originally Posted By callgood:
Google tried using a technical fix to reduce bias in a feature that generates realistic-looking images of people. Instead, it set off a new diversity firestorm.
(snipped; the full article is quoted in the OP above)
View Quote


Why is it run by all dot indians?