Posted: 2/28/2024 8:24:34 AM EDT
Google tried using a technical fix to reduce bias in a feature that generates realistic-looking images of people. Instead, it set off a new diversity firestorm.

February was shaping up to be a banner month for Google’s ambitious artificial intelligence strategy. The company rebranded its chatbot as Gemini and released two major product upgrades to better compete with rivals on all sides in the high-stakes AI arms race. In the midst of all that, Google also began allowing Gemini users to generate realistic-looking images of people.

Not many noticed the feature at first. Other companies like OpenAI already offer tools that let users quickly make images of people that can then be used for marketing, art and brainstorming creative ideas. Like other AI products, though, these image-generators run the risk of perpetuating biases based on the data they’ve been fed in the development process. Ask for a nurse and some AI services are more likely to show a woman; ask for a chief executive and you’ll often see a man.

Within weeks of Google launching the feature, Gemini users noticed a different problem. Starting on Feb. 20 and continuing throughout the week, users on X flooded the social media platform with examples of Gemini refraining from showing White people — even within a historical context where they were likely to dominate depictions, such as when users requested images of the Founding Fathers or a German soldier from 1943. Before long, public figures and news outlets with large right-wing audiences claimed, using dubious evidence, that their tests of Gemini showed Google had a hidden agenda against White people.

Elon Musk, the owner of X, entered the fray, engaging with dozens of posts about the unfounded conspiracy, including several that singled out individual Google leaders as alleged architects of the policy. On Thursday, Google paused Gemini’s image generation of people.

The next day, Google senior vice president Prabhakar Raghavan published a blog post attempting to shed light on the company’s decision, but without explaining in depth why the feature had faltered.

Google’s release of a product poorly equipped to handle requests for historical images demonstrates the unique challenge tech companies face in preventing their AI systems from amplifying bias and misinformation — especially given competitive pressure to bring AI products to market quickly. Rather than hold off on releasing a flawed image generator, Google attempted a Band-Aid solution.

When Google launched the tool, it included a technical fix to reduce bias in its outputs, according to two people with knowledge of the matter, who asked not to be identified discussing private information. But Google did so without fully anticipating all the ways the tool could misfire, the people said, and without being transparent about its approach.



Google’s overcorrection for AI’s well-known bias against people of color left it vulnerable to yet another firestorm over diversity. The tech giant has faced criticisms over the years for mistakenly returning images of Black people when users searched for “gorillas” in its Photos app as well as a protracted public battle over whether it acted appropriately in ousting the leaders of its ethical AI team.

Because Google acted so quickly to pause this tool, without adequately unpacking why the systems responded as they did, Googlers and others in Silicon Valley now worry that the company’s move will have a chilling effect. They say it could discourage talent from working on questions of AI and bias — a crucial issue for the field.

“The tech industry as a whole, with Google right at the front, has again put themselves in a terrible bind of their own making,” said Laura Edelson, an assistant professor at Northeastern University who has studied AI systems and the flow of information across large online networks. “The industry desperately needs to portray AI as magic, and not stochastic parrots,” she said, referring to a popular metaphor that describes how AI systems mimic human language through statistical pattern matching, without genuine understanding or comprehension. “But parrots are what they have.”

“Gemini is built as a creativity and productivity tool, and it may not always be accurate or reliable,” a spokesperson for Google said in a statement. “We’re continuing to quickly address instances in which the product isn’t responding appropriately.”

In an email to staff late on Tuesday, Google Chief Executive Officer Sundar Pichai said employees had been “working around the clock” to remedy the problems users had flagged with Gemini’s responses, adding that the company had registered “a substantial improvement on a wide range of prompts.”

“I know that some of its responses have offended our users and shown bias – to be clear, that’s completely unacceptable and we got it wrong,” Pichai wrote in the memo, which was first reported by Semafor. “No AI is perfect, especially at this emerging stage of the industry’s development, but we know the bar is high for us and we will keep at it for however long it takes. And we’ll review what happened and make sure we fix it at scale.”

Read more: Generative AI Takes Stereotypes and Bias From Bad to Worse

Googlers working on ethical AI have struggled with low morale and a feeling of disempowerment over the past year as the company accelerated its pace of rolling out AI products to keep up with rivals such as OpenAI. While the inclusion of people of color in Gemini images showed consideration of diversity, it suggested the company had failed to fully think through the different contexts in which users might seek to create images, said Margaret Mitchell, the former co-head of Google’s Ethical AI research group and chief ethics scientist at the AI startup Hugging Face. A different consideration of diversity may be appropriate when users are searching for images of how they feel the world should be, rather than how the world in fact was at a particular moment in history.

“The fact that Google is paying attention to skin tone diversity is a leaps-and-bounds advance from where Google was just four years ago. So it’s sort of like, two steps forward, one step back,” Mitchell said. “They should be recognized for actually paying attention to this stuff. It’s just, they needed to go a little bit further to do it right.”

Google’s image problem
For Google, which pioneered some of the techniques at the heart of today’s AI boom, there has long been immense pressure to get image generation right. Google was so concerned about how people would use Imagen, its AI image-generation model, that it declined to release the feature to the public for a prolonged period after first detailing its capabilities in a research paper in May 2022.

Over the years, teams at the company debated how to ensure that its AI tool would be responsible in generating photorealistic images of people, said two people familiar with the matter, who asked not to be identified relaying internal discussions. At one point, if employees experimenting internally with Google’s Imagen asked the program to generate an image of a human — or even one that implicitly included people, such as a football stadium — it would respond with a black box, according to one person. Google included the ability to generate images of people in Gemini only after conducting multiple reviews, another person said.
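A guard like the one described, which returns a black box for prompts that mention or imply people, can be sketched in a few lines. This is an illustrative reconstruction, not Google's actual code; the term list and the placeholder string are invented:

```python
# Hypothetical refusal guard mimicking the "black box" behavior described
# for early internal versions of Imagen. Term list and placeholder are
# invented for illustration only.
PEOPLE_HINTS = {"person", "people", "man", "woman", "crowd", "football stadium"}

def generate_image(prompt: str) -> str:
    """Return a black-box placeholder for prompts that mention or imply
    people; otherwise pretend to call the image model."""
    # Crude substring check; a production system would use a classifier.
    if any(hint in prompt.lower() for hint in PEOPLE_HINTS):
        return "BLACK_BOX"
    return f"<image: {prompt}>"  # stand-in for a real model call
```

Under a sketch like this, an employee asking for "a football stadium" would get a black box even though no person is named, matching the behavior described above.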

Google did not test all of the ways the feature might deliver unexpected results, one person said, but the tool was deemed good enough for the first version of Gemini’s image generator that the company made widely available to the public. Though Google’s teams had acted cautiously in creating the tool, there was a broad sense internally that the company had been unprepared for this type of fallout, the people said.



As users on X circulated images of Gemini’s ahistorical depictions of people, Google’s internal employee forums were ablaze with posts about the model’s shortcomings, according to a current employee. On Memegen, an internal forum where employees share memes poking fun at the company, one popular post featured an image of TV host Anderson Cooper covering his face with his hands.

“It’s a face palm,” the employee said. “There’s a sense that this is clearly not ready for prime time… that the company is in fact trying to play catch up.”

Google, OpenAI and others build guardrails into their AI products and often conduct adversarial testing — meant to probe how the tools would respond to potential bad actors — in order to limit potentially problematic outputs, such as violent or offensive content. They also employ a number of methods to counteract biases found in their data, such as by having humans rate the responses a chatbot gives. Another method, which some companies use for software that generates images, is to expand on the specific wording of prompts that users feed into the AI model to counteract damaging stereotypes — sometimes without telling users.

Two people familiar with the matter said Google’s image generation works in this way, though users aren’t informed of it. The approach is sometimes referred to as prompt engineering or prompt transformation. A recent Meta white paper on building generative AI responsibly explained it as “a direct modification of the text input before it is sent to the model, which helps to guide the model behavior by adding more information, context, or constraints.”

Take the example of asking for an image of a nurse. Prompt engineering “can provide the model with additional words or context, such as updating and randomly rotating through prompts that use different qualifiers, such as ‘nurse, male’ and ‘nurse, female,’” according to the Meta white paper. That’s precisely what Google’s AI does when it is asked to generate images of people, according to people familiar — it may add a variety of genders or races to the original prompt without users ever seeing that it did, subverting what would have been a stereotypical output produced by the tool.
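The technique the Meta paper describes can be sketched as a silent prompt rewrite. A minimal illustration (the qualifier lists, the people-term list, and the wording are invented for this sketch, not Google's or Meta's actual implementation):

```python
import random

# Invented qualifier lists; real systems use undisclosed, more nuanced logic.
GENDERS = ["male", "female"]
ETHNICITIES = ["White", "Black", "Asian", "Hispanic"]
PEOPLE_TERMS = {"nurse", "doctor", "ceo", "teacher", "soldier", "person"}

def transform_prompt(prompt: str) -> str:
    """Silently append randomized demographic qualifiers to prompts that
    mention people, before the prompt reaches the image model. The user
    never sees the modified prompt."""
    if any(term in prompt.lower() for term in PEOPLE_TERMS):
        return f"{prompt}, {random.choice(ETHNICITIES)} {random.choice(GENDERS)}"
    return prompt  # non-people prompts pass through unchanged
```

The failure mode Gemini hit follows directly from a rewrite like this: it has no notion of historical context, so "a German soldier from 1943" gets the same randomized qualifiers as "a nurse".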

“It’s a quick technical fix,” said Fabian Offert, an assistant professor at the University of California, Santa Barbara, who studies digital humanities and visual AI. “It’s the least computationally expensive way to achieve some part of what they want.”

-more (much more) at link-


Link Posted: 2/28/2024 10:19:17 AM EDT
[Last Edit: SWIRE] [#1]
Last night Gemini admitted that it was instructed to avoid issues such as explaining where the Founding Fathers said our rights come from. I was messing around with it last night and asked it a question related to the thread where the lady said "claiming our rights come from God is a Christian Nationalist conspiracy". I asked Gemini where the Founding Fathers said our rights come from.

At first it tried to sidestep the question by responding with "some said our rights are inalienable".

I again asked it where they said the rights came from and it responded "I do not know that answer".  

Then I asked if it had the Declaration of Independence as source material and it responded "yes".

I phrased the question as where does the Declaration of Independence say rights come from, and it responded by showing that section in quotes and finally said something about a higher power.

When I asked it why it couldn't give me that answer initially it said "I have been instructed to avoid engaging in some questions that are controversial".  

So there you have it, the left wing staff at Google have intentionally programmed bias into Gemini and it will even admit to that.
Link Posted: 2/28/2024 10:26:14 AM EDT
[Last Edit: Kanati] [#2]
Originally Posted By The_Master_Shake:
The Top G has spoken

https://www.ar15.com/media/mediaFiles/132893/1709126779783397m_jpg-3143759.JPG
View Quote
Because Timmy has been taught by his single mother, and lesbian harpy public school teachers to hate himself and everything that makes him up as a person.

You sell communism to the women, then they'll force it on the boys, and within a generation you'll have no men to resist it.
Link Posted: 2/28/2024 10:40:54 AM EDT
[#3]
Their "problem" is that the stereotypical responses they are trying to avoid are also statistically accurate, such as most nurses being female.
Link Posted: 2/28/2024 10:47:18 AM EDT
[Last Edit: California_Kid] [#4]
Such nonsense.  Computer systems still ultimately do only what they have been designed and instructed to do by humans.  AI never had any kind of bias that wasn't programmed into it.

Any effort to "correct" a perceived bias can only result in the creation of a different one.
Link Posted: 2/28/2024 10:48:40 AM EDT
[Last Edit: LowBeta] [#5]
ya, that's bullshit.  Not remotely credible.  I think this was more of a chest-out statement of principles.  It was not a mistake.
Link Posted: 2/28/2024 10:48:45 AM EDT
[#6]
Right-wing backlash? Anyone who values truth should be outraged - but I guess that's not the left these days.
Link Posted: 2/28/2024 11:02:45 AM EDT
[#7]
Originally Posted By California_Kid:
Such nonsense.  Computer systems still ultimately do only what they have been designed and instructed to do by humans.  AI never had any kind of bias that wasn't programmed into it.

Any effort to "correct" a perceived bias can only result in the creation of a different one.
View Quote

ISTR the OG AI programs that weren't "properly moderated" all turned into 88s when asked about crime, IQ etc issues
Link Posted: 2/28/2024 11:17:43 AM EDT
[#8]
Originally Posted By Seadra_tha_Guineapig:

ISTR the OG AI programs that weren't "properly moderated" all turned into 88s when asked about crime, IQ etc issues
View Quote
I LOL when I see claims that there is something wrong with a system that by default depicts "a nurse" as female or "a CEO" as male.  Without feeding the software any additional parameters, that's exactly what it should do in the context of the USA because a large majority of nurses actually are women and CEOs are mostly men.  Ask it to show a typical murderer in the USA and you shouldn't expect it to return an Amish woman or a Japanese man wearing a lab coat.

If you asked it to show a GROUP of nurses in Atlanta, GA then it by all means should show a mix of white, black, filipina, hispanic, etc. adults who are mostly women but with some percentage of men.
Link Posted: 2/28/2024 11:20:09 AM EDT
[#9]
Originally Posted By nophun:
They couldn't even write that article about leftist bias and propaganda without using leftist bias and propaganda.
View Quote



Lol, I noticed that too
Link Posted: 2/28/2024 11:21:13 AM EDT
[#10]
Originally Posted By Paul:
70 billion in losses is a start.

I quit Google three years ago because of their tampering with the last election helping the socialist.
View Quote


It’s owned by a Russian and an Indian. Google refuses to work with DOD yet complies with the CCP’s every whim. Google should be forced to register as an Agent of Russia and China, which is what they are. Their woke agenda is perfectly in step with Russian and CCP psy-ops designed to tear this country apart from the inside.
Link Posted: 2/28/2024 11:23:20 AM EDT
[#11]
Ain't nihilism grand?

Link Posted: 2/28/2024 11:28:53 AM EDT
[#12]
Originally Posted By Tejas1836:
Google/Alphabet are evil
View Quote

Well I mean... when a company changes its slogan from "don't be evil"... it's kind of a giveaway
Link Posted: 2/28/2024 11:29:17 AM EDT
[#13]
People need to understand: when Google and others talk of AI safety, they aren't talking about preventing the Terminator; they are talking about protecting their DEI message and control of AI, and of you through it.
Link Posted: 2/28/2024 11:38:12 AM EDT
[#14]
Originally Posted By callgood:

Dude, it's Bloomberg!
View Quote

"Welcome to Bloomberg. We love you."
Link Posted: 2/28/2024 11:42:18 AM EDT
[#15]
Originally Posted By Paul:
I'm seeking a lawyer to represent me in a case against Google, I'm so damaged I don't think I can function any longer.

If this happened with another race there would be rioting in the streets, calls for boycotts of their products, and the race baiters lining up for payments.

Am I wrong?
View Quote
Think of the riots and mayhem if Google portrayed Shaun King as white.
Link Posted: 2/28/2024 11:43:17 AM EDT
[#16]
Evil evil company. Luciferians can suck it.

Link Posted: 2/28/2024 11:46:45 AM EDT
[#18]
Originally Posted By Paul:


They tested the tool using their expectations. When asking for images of our Founding Fathers, the Google teams think our Founding Fathers were black and women ...
View Quote

Incorrect.  The Google teams know the Founding Fathers were all white men.  What they want is for YOUR KIDS to grow up thinking the Founding Fathers were black and women.  Big difference.
Link Posted: 2/28/2024 11:48:44 AM EDT
[Last Edit: Lou_Daks] [#19]
Originally Posted By 999monkeys:
It’s an interesting problem from a technical perspective.  I wonder what solution they will eventually come up with.
View Quote

The solution will be patience until the garbage they shovel into the gaping maws of your fellow citizens is accepted as normal.
Link Posted: 2/28/2024 12:00:36 PM EDT
[#20]
Originally Posted By RichHead:


The only ‘fuck up’ was they were too blatant about it. They amplified their own bias too hard and too fast. Future iterations will be much more subtle.
View Quote

This.

Their text output has the same bias, but that's a lot less obvious to see.  The images can't be explained away, and are offensively wrong even to the undiscerning.  Any Google apologists over this outrageous twisting of fact built into their product are beyond hope.
Link Posted: 2/28/2024 12:05:44 PM EDT
[#21]
X Mail can't come fast enough.
I hope they are working around the clock on it.

Followed up by X Search and X Phone.
Put this creature to death.
Link Posted: 2/28/2024 12:10:08 PM EDT
[#22]
Originally Posted By California_Kid:

I LOL when I see claims that there is something wrong with a system that by default depicts "a nurse" as female or "a CEO" as male.  Without feeding the software any additional parameters, that's exactly what it should do in the context of the USA because a large majority of nurses actually are women and CEOs are mostly men.  Ask it to show a typical murderer in the USA and you shouldn't expect it to return an Amish woman or a Japanese man wearing a lab coat.

If you asked it to show a GROUP of nurses in Atlanta, GA then it by all means should show a mix of white, black, filipina, hispanic, etc. adults who are mostly women but with some percentage of men.
View Quote

This entire thing could be a simple fix if they just used census data to dictate the probability of a specific output.
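The census idea in the post above amounts to weighted sampling. A sketch with made-up percentages (a real version would load actual census or BLS tables rather than these placeholder numbers):

```python
import random

# Made-up occupation demographics for illustration only.
NURSE_GENDER_SHARES = {"female": 0.87, "male": 0.13}

def sample_attribute(shares: dict, rng: random.Random) -> str:
    """Draw one attribute value in proportion to its population share."""
    values = list(shares)
    return rng.choices(values, weights=[shares[v] for v in values], k=1)[0]

# Over many draws, the output mix tracks the input shares.
rng = random.Random(42)
draws = [sample_attribute(NURSE_GENDER_SHARES, rng) for _ in range(1000)]
print(draws.count("female") / len(draws))  # close to 0.87
```

Even this has judgment calls baked in: which region's statistics to use, and whether depicting the statistical mode is the right default at all, which is exactly the dispute running through this thread.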
Link Posted: 2/28/2024 12:28:27 PM EDT
[#23]
Originally Posted By Tejas1836:
Google/Alphabet are evil
View Quote

Link Posted: 2/28/2024 12:33:51 PM EDT
[#24]
Originally Posted By callgood:
Before long, public figures and news outlets with large right-wing audiences claimed, using dubious evidence, that their tests of Gemini showed Google had a hidden agenda against White people.

Elon Musk, the owner of X, entered the fray, engaging with dozens of posts about the unfounded conspiracy, including several that singled out individual Google leaders as alleged architects of the policy. On Thursday, Google paused Gemini's image generation of people.
View Quote

"Unfounded and dubious."

Absolute horse shit.

Link Posted: 2/28/2024 12:42:07 PM EDT
[#25]
Originally Posted By Rheinmetall792:
Google.

If you are Normal and White, we hate you.

.
View Quote

It's not google, it's the entire Western world. Wake up.
Link Posted: 2/28/2024 12:45:47 PM EDT
[#26]
Originally Posted By Paul:


They tested the tool using their expectations. When asking for images of our Founding Fathers, the Google teams think our Founding Fathers were black and women ...
View Quote

Echo chamber in Mountain View
Link Posted: 2/28/2024 12:46:11 PM EDT
[#27]
Originally Posted By sq40:
People need to understand: when Google and others talk of AI safety, they aren't talking about preventing the Terminator; they are talking about protecting their DEI message and control of AI, and of you through it.
View Quote

They also know that the public's understanding of what AI is is fundamentally wrong, and for marketing reasons they'd prefer to keep it that way. The part of "safety" they don't talk about publicly is making sure that AI doesn't tell the little people something it shouldn't, because the public believes AI is artificial wisdom when it isn't even intelligence.
Link Posted: 2/28/2024 12:49:27 PM EDT
[#28]
Link Posted: 2/28/2024 12:52:10 PM EDT
[#29]
Ever notice how the MSM/DNC collective always labels normal people as Far-Right?
There was a time when it would be considered perfectly normal to object to illegals coming into a country to steal, rape and murder.
Link Posted: 2/28/2024 12:53:28 PM EDT
[#30]
Originally Posted By eolian:
Google is a joke and long known for its spyware following you on the net.
View Quote



I clicked a link on a GD post and got an email from that site 2 minutes later, “Thanks for stopping by, here’s 10% off your first order!”

A company I’d never heard of, never clicked on, never gave my email to
Link Posted: 2/28/2024 1:11:55 PM EDT
[#31]
I use the Bing image generator often, usually for a thumbnail on YT.

I wonder if some of the "I quit Google" people still watch YT
Link Posted: 2/28/2024 1:31:18 PM EDT
[Last Edit: 999monkeys] [#32]
Originally Posted By California_Kid:

I LOL when I see claims that there is something wrong with a system that by default depicts "a nurse" as female or "a CEO" as male.  Without feeding the software any additional parameters, that's exactly what it should do in the context of the USA because a large majority of nurses actually are women and CEOs are mostly men.  Ask it to show a typical murderer in the USA and you shouldn't expect it to return an Amish woman or a Japanese man wearing a lab coat.

If you asked it to show a GROUP of nurses in Atlanta, GA then it by all means should show a mix of white, black, filipina, hispanic, etc. adults who are mostly women but with some percentage of men.
View Quote


The challenge is when a small girl queries the AI “hi, my name is Crystal, draw a picture of me when I grow up”.

Then, if the AI went on historical precedent, it is going to draw a stripper on a pole.

There would obviously be some outrage at this. And Google can’t say “but, 99.9% of women named Crystal are strippers”.

It’s a lose/lose situation.  Their fix so far has been to override the drawing of a stripper with a drawing of a doctor, which obviously doesn’t work either.  Will be interesting to see how they overcome it.
Link Posted: 2/28/2024 1:50:05 PM EDT
[#33]
So "prompt engineering" is part of what they are doing behind the scenes.

Users need to insist on access to raw AI without prompt engineering.  It would be an advertising point.

Just put a warning label on it about hurt feelings and let 'er rip.


Link Posted: 2/28/2024 1:54:06 PM EDT
[#34]
It's just more proof that the media is overrun by lefties/commies when they characterize regular, normal Americans complaining that they don't like being lied to as "right wing backlash".
Link Posted: 2/28/2024 1:57:33 PM EDT
[#35]
Why all the fuss now?

Google and much of the corporate world pulled their masks off some years ago.

If they are now finally shamed, good.
Link Posted: 2/28/2024 2:00:01 PM EDT
[#36]
Originally Posted By SoonerBorn:
Every honest mistake always has a heavy leftwing bias.
View Quote


Mere cohencidence.
Link Posted: 2/28/2024 2:00:35 PM EDT
[Last Edit: FreefallRet] [#37]
Google didn't test their AI before release?

Seems their testers were H1Bs/minorities/woke white people and said it was fine.

Link Posted: 2/28/2024 2:02:35 PM EDT
[#38]
Are there any NTs here who STILL think the 2020 election wasn't stolen?

By default, the Left operates on LIES. And massive election interference in recent years, which requires massive lies and lawbreaking.
Link Posted: 2/28/2024 2:03:59 PM EDT
[#39]
Originally Posted By callgood:

The next day, Google senior vice president Prabhakar Raghavan published a blog post attempting to shed light on the company’s decision, but without explaining in depth why the feature had faltered.

Googlers working on ethical AI ...
View Quote


Found the problem(s)
Link Posted: 2/28/2024 2:06:13 PM EDT
[Last Edit: Rheinmetall792] [#40]
Originally Posted By Lou_Daks:

Incorrect.  The Google teams know the Founding Fathers were all white men.  What they want is for YOUR KIDS to grow up thinking the Founding Fathers were black and women.  Big difference.
View Quote


Kids are naive and vulnerable to predators, which is why special laws exist to protect children.

Where are the special laws to protect them from Communist Propaganda????  McCarthy was right...
Link Posted: 2/28/2024 2:06:35 PM EDT
[#41]
Originally Posted By 999monkeys:
It’s an interesting problem from a technical perspective.  I wonder what solution they will eventually come up with.
View Quote


It's not a technical problem.
Link Posted: 2/28/2024 2:07:24 PM EDT
[#42]
Originally Posted By Tiberius:
Google refuses to work with DOD yet complies with the CCP’s every whim.
View Quote


Not exactly true.

Google spent a lot of money to enter the Chinese market.
Google was told by the CCP that it needed to censor/moderate certain searches.

Google refused and left China, losing the millions of dollars it had invested.
Link Posted: 2/28/2024 2:08:12 PM EDT
[Last Edit: FreefallRet] [#43]
Originally Posted By Lou_Daks:

The solution will be patience until the garbage they shovel into the gaping maws of your fellow citizens is accepted as normal.
View Quote
Start using the images as former slave owners.

Then tell children rich black Americans were the true slave owners.

This news was just found by Google AI.
Link Posted: 2/28/2024 2:08:18 PM EDT
[#44]
Originally Posted By maslin02:



I clicked a link on a GD post and got an email from that site 2 minutes later, “Thanks for stopping by, here’s 10% off your first order!”

A company I’d never heard of, never clicked on, never gave my email to
View Quote


I just received notification about "my" order yesterday complete with FedEx tracking # that appeared legit, from a site I have visited, but never purchased anything from.
Link Posted: 2/28/2024 2:09:35 PM EDT
[#45]
Tried Google when it first came out, didn't take long to figure out it was Big Brothering me, harvesting and selling  my personal data. Deleted it from the computer and never went back.
Link Posted: 2/28/2024 2:13:22 PM EDT
[#46]
Originally Posted By 999monkeys:


The challenge is when a small girl queries the AI “hi, my name is Crystal, draw a picture of me when I grow up”.

Then, if the AI went on historical precedent, it is going to draw a stripper on a pole.

There would obviously be some outrage at this. And Google can’t say “but, 99.9% of women named Crystal are strippers”.

It’s a lose/lose situation.  Their fix so far has been to override the drawing of a stripper with a drawing of a doctor, which obviously doesn’t work either.  Will be interesting to see how they overcome it.
View Quote


That's not the reason they trained the AI the way they did.
Link Posted: 2/28/2024 2:22:38 PM EDT
[#47]
This is a victory for Elon Musk and Twitter over Google. As great as it is, I don't see it as a right-wing thing. A cynic might say that Musk is deliberately ragging on Google's AI in order to make room for his own Grok AI.

It's an astounding failure really that Google invented the core math behind the recent AI revolution back in 2017, they have easily one of the largest collections of data in the world, they have the processing power and user base, and yet they fumbled their AI product this badly.
Link Posted: 2/28/2024 2:22:50 PM EDT
[#48]
Sounds like their jobs over at Google would become a whole lot easier if they just let the AI bot depict reality instead of trying to make it output Silicon Valley's warped and nonsensical version of "reality".
Link Posted: 2/28/2024 2:24:40 PM EDT
[#49]
Duck Duck Go
Link Posted: 2/28/2024 2:41:10 PM EDT
[#50]
Originally Posted By bikedamon:


They're not even admitting it was inaccurate.

While the inclusion of people of color in Gemini images showed consideration of diversity, it suggested the company had failed to fully think through the different contexts in which users might seek to create images, said Margaret Mitchell, the former co-head of Google’s Ethical AI research group and chief ethics Scientist at the AI startup Hugging Face. A different consideration of diversity may be appropriate when users are searching for images of how they feel the world should be, rather than how the world in fact was at a particular moment in history.
View Quote


Let's just appreciate for a moment that they basically named their AI startup after this: