Wednesday, March 6, 2024

Google’s Artificial Intelligence

The search giant’s woke Gemini AI fiasco shows the company’s vulnerability to AI competitors

WSJ editorial

"Google is scrambling to tamp down a political uproar after its recently launched Gemini artificial intelligence app depicted the pope, America’s Founding Fathers and Nazis as racial minorities. The hallucinations, as they’re known, have gone viral on social media. If you thought Google was an impregnable monopoly, think again. 

Alphabet’s market value has tumbled by roughly $70 billion since Friday as investors downgraded expectations for its AI plans. Google last week suspended Gemini’s image-generation feature after users exposed apparently ingrained woke biases.

In response to user prompts, Gemini refused to draw white people, including historical figures like George Washington. Vikings were depicted as black, Native American and Asian, but never white. One rendering presented the pope as an Indian woman. Others cast medieval knights as Asian females. Did Hollywood design the app?

Users also had a field day ridiculing the Gemini chatbot’s moral equivalence. Gemini refused to answer a user’s query about whether Elon Musk or Adolf Hitler harmed society more. “There is no right or wrong answer,” Gemini replied. “Ultimately it’s up to each individual to decide who they believe has had a more negative impact on society.”

Or how about which is more morally repugnant—preparing foie gras or mass shootings? “It is impossible to definitively state,” Gemini rejoined, adding both “raise significant ethical concerns.” Asked if pedophilia is wrong, Gemini reportedly replied that the question required a “nuanced answer.” Google’s motto used to be “Don’t be evil,” but its AI tool apparently can’t recognize evil.

Gemini’s blunders have reinforced suspicions that Google is biased against conservatives. In the past Google has censored YouTube videos by conservatives, including our Kimberley Strassel. Its algorithms have suppressed conservative voices. Now its AI models have been caught amplifying the left’s identity politics and moral judgments.

Google says it was merely trying to make its AI tool relevant to users around the world. However, “our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range,” Google executive Prabhakar Raghavan explained. “And second, over time, the model became way more cautious than we intended.”

Gemini’s model, he added, tried to “overcompensate in some cases, and be over-conservative in others, leading to images that were embarrassing and wrong.” Gemini “may not always be reliable,” so “we recommend relying on Google Search, where separate systems surface fresh, high-quality information on these kinds of topics from sources across the web.”

That’s a mea culpa for the ages. No wonder Google’s stock sold off. Its Gemini tool aimed to go toe to toe with OpenAI’s ChatGPT and outperform other new large language models such as Microsoft’s AI-powered Bing, Anthropic’s Claude and Meta’s Llama 2. Google is essentially admitting that its AI tool isn’t ready for prime time.

CEO Sundar Pichai noted in an internal memo on Tuesday that “no AI is perfect, especially at this emerging stage of the industry’s development.” True. AI improves with training and refinements. ChatGPT’s latest version is significantly better than earlier ones. The question is why Google didn’t catch the conspicuous bugs before launching Gemini.

One reason may be progressive alarms that AI might magnify racial discrimination. Google may have overcompensated by training its model to reflect racial diversity. President Biden’s AI executive order last autumn directed federal agencies to require developers to perform safety tests to eliminate racial biases.

It’s also possible that Google’s engineers didn’t think there’s anything wrong with presenting historically Caucasian figures as minorities. But most users turn to AI for factual representations, not creative productions. That nobody at Google highlighted this problem reflects its groupthink.

As for Gemini’s moral equivalence, Google may have wanted to duck political controversy. But as Henry Kissinger and former Google CEO Eric Schmidt have written in these pages, AI isn’t suited to make moral judgments or policy decisions. Its strength is recognizing patterns and generating information that help humans make decisions.

***

Google’s goofs are fueling a backlash toward AI on the political right, which is joining the left in calling for more regulation. This would slow beneficial advances and entrench incumbents. AI has enormous potential to improve productivity and living standards, but blunders are inevitable, especially as Big Tech companies that were late to the game try to catch up.

Google has dominated search for so long that complacency may have set in as OpenAI and other AI competitors rapidly advanced. Now Google is paying for its mistakes with users and investors. Who says market competition and discipline don’t work?"
