
Ethical AI Isn’t to Blame for Google’s Gemini Debacle


Earlier this month, Google released its long-awaited system Gemini, giving users access to its AI image-generation technology for the first time. While most early users agreed the system was impressive, generating detailed images from text prompts in seconds, they soon discovered that it was difficult to get it to produce images of white people, and viral tweets soon displayed head-scratching examples such as racially diverse Nazis.

Some people faulted Gemini for being "too woke," using it as the latest weapon in an escalating culture war over the importance of recognizing the effects of historical discrimination. Many said it reflected a malaise inside Google, and some criticized the field of "AI ethics" as an embarrassment.

The idea that ethical AI work is to blame is wrong. In fact, Gemini showed that Google wasn't correctly applying the lessons of AI ethics. Where AI ethics focuses on addressing foreseeable use cases, such as historical depictions, Gemini seems to have opted for a "one size fits all" approach, resulting in an awkward mix of refreshingly diverse and cringeworthy outputs.

I should know. I've worked on ethics in AI within technology companies for over 10 years, making me one of the most senior experts in the world on the matter (it's a young field!). I also founded and co-led Google's "Ethical AI" team, before the company fired me and my co-lead following our report warning about exactly these kinds of issues in language generation. Many people criticized Google for that decision, believing it reflected systemic discrimination and a prioritization of reckless speed over well-considered AI strategy. It is possible I strongly agree.

The Gemini debacle again laid bare Google's inexpert strategy in areas where I'm uniquely qualified to help, and which I can now help the public understand more generally. This piece will discuss some ways that AI companies can do better next time, avoiding handing the far right unhelpful ammunition in the culture wars and ensuring that AI benefits as many people as possible in the future.

One of the critical pieces in operationalizing ethics in AI is articulating foreseeable use, including malicious use and misuse. This means working through questions such as: Once the model we're thinking of building is deployed, how will people use it? And how can we design it to be as beneficial as possible in those contexts? This approach recognizes the central importance of "context of use" when creating AI systems. This type of foresight and contextual thinking, grounded in the interaction of society and technology, is harder for some people than others; it is where people with expertise in human-computer interaction, social science, and cognitive science are particularly skilled (speaking to the importance of interdisciplinarity in tech hiring). These roles tend not to be given as much power and influence as engineering roles, and my guess is that this was true in the case of Gemini: those most skilled at articulating foreseeable uses were not empowered, leading to a system that could not handle multiple types of appropriate use, such as the depiction of historically white groups.

Things go wrong when organizations treat all use cases as one use case, or don't model use cases at all. Without an ethics-grounded analysis of use cases in different contexts, AI systems may lack models "under the hood" that help identify what the user is asking for (and whether it should be generated). For Gemini, this could involve determining whether the user is seeking imagery that is historical or diverse, and whether their request is ambiguous or malicious. We recently saw this same failure to build robust models of foreseeable use lead to the proliferation of AI-generated Taylor Swift pornography.
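To make the idea of such an "under the hood" model concrete, here is a minimal sketch in Python of a routing step that classifies a request's context of use before anything is generated. Everything in it is hypothetical: the category labels, function names, and policy choices are my own placeholders for illustration, not a description of Gemini's actual architecture.

```python
from dataclasses import dataclass

# Hypothetical request categories an image-generation system might distinguish.
# These labels are illustrative placeholders, not any system's actual taxonomy.
CATEGORIES = ["historical_depiction", "neutral_or_fictional", "ambiguous", "malicious"]


@dataclass
class RoutingDecision:
    category: str
    diversify_outputs: bool   # whether to broaden demographic representation
    refuse: bool              # whether to decline the request


def route_request(prompt: str, classify) -> RoutingDecision:
    """Classify the prompt's context of use, then pick a generation policy.

    `classify` is assumed to be a model that maps a prompt to one of
    CATEGORIES; how that model is built is outside the scope of this sketch.
    """
    category = classify(prompt)
    if category == "malicious":
        return RoutingDecision(category, diversify_outputs=False, refuse=True)
    if category == "historical_depiction":
        # Historical requests should reflect the historical record rather than
        # a one-size-fits-all diversity policy.
        return RoutingDecision(category, diversify_outputs=False, refuse=False)
    if category == "ambiguous":
        # An ambiguous request could also trigger a clarifying question to the user.
        return RoutingDecision(category, diversify_outputs=True, refuse=False)
    # Neutral or fictional requests: broadening representation is usually appropriate.
    return RoutingDecision(category, diversify_outputs=True, refuse=False)
```

Any stand-in classifier, even a toy keyword matcher, is enough to exercise the sketch; the substantive point is that the decision to diversify outputs is made per context of use rather than applied globally.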

To assist with this, years ago I made the following chart. The task is to fill out the cells; here I've filled it out with a few examples relevant to Gemini specifically.

Chart credit: Margaret Mitchell

The green cells (top row) are those where beneficial AI is most likely possible (not where AI will always be beneficial). The red cells (middle row) are those where harmful AI is most likely (but may also be where unforeseen beneficial innovation may occur). The rest of the cells are more likely to have mixed results – some outcomes good, some bad.
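Because the chart itself isn't reproduced here, the sketch below only illustrates its general shape as a structure one might fill out during review. The row labels, example prompts, and outcome tags are placeholders I've invented for illustration, not the chart's actual cells.

```python
# A hypothetical rendering of the chart's shape. The rows follow the colour
# coding described above (top row beneficial, middle row harmful, the rest
# mixed); all labels and example prompts are invented placeholders.
foreseeable_use_grid = {
    "appropriate, foreseen use": {           # green / top row
        "example_prompt": "an image of a doctor at work",
        "most_likely_outcome": "beneficial",
    },
    "malicious use or misuse": {             # red / middle row
        "example_prompt": "demeaning imagery of a real, named person",
        "most_likely_outcome": "harmful",
    },
    "ambiguous or unanticipated use": {      # remaining cells
        "example_prompt": "a scene from the 1800s (accuracy vs. reimagining unclear)",
        "most_likely_outcome": "mixed",
    },
}
```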

The next steps involve working through likely errors in different contexts and addressing disproportionate errors for subgroups subject to discrimination. The developers of Gemini seem to have gotten this part largely right: the team appears to have had the foresight to recognize the risk of overrepresenting white people in neutral or positive situations, which would amplify a problematic white-dominant view of the world. And so there was likely a submodule within Gemini designed to show users darker skin tones.
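The evaluation half of that step can also be sketched concretely: disaggregating error rates by subgroup so that disproportionate failures surface before launch. The data format, threshold, and numbers below are assumptions for illustration, not a description of Google's actual process.

```python
from collections import defaultdict


def disaggregated_error_rates(results):
    """Compute per-subgroup error rates from evaluation records.

    `results` is assumed to be an iterable of (subgroup, is_error) pairs,
    e.g. produced by human raters reviewing generated images.
    """
    totals, errors = defaultdict(int), defaultdict(int)
    for subgroup, is_error in results:
        totals[subgroup] += 1
        errors[subgroup] += int(is_error)
    return {group: errors[group] / totals[group] for group in totals}


def flag_disparities(rates, max_gap=0.05):
    """Flag subgroups whose error rate exceeds the best-performing group's
    rate by more than `max_gap` (an illustrative threshold)."""
    best = min(rates.values())
    return {group: rate for group, rate in rates.items() if rate - best > max_gap}


# Example with made-up numbers: errors concentrated on one subgroup get flagged.
rates = disaggregated_error_rates([
    ("group_a", False), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", True), ("group_b", False),
])
print(flag_disparities(rates))  # {'group_b': 0.666...}
```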

The fact that these steps are evident in Gemini, but not the steps involving foreseeable use, may be due in part to increased public awareness of bias in AI systems: a pro-white bias was an easily foreseeable PR nightmare, echoing the gorilla incident Google has become infamous for, whereas the nuanced approach needed to handle "context of use" was not. The net effect was a system that "missed the mark" on being inclusive of foreseeable, appropriate use cases.

The high-level point is that it is possible to have technology that benefits users and minimizes harm to those most likely to be negatively affected. But you have to include people who are good at doing this in development and deployment decisions, and these people are often disempowered (or worse) in tech. It doesn't have to be this way: We can have different paths for AI that empower the right people for what they're most qualified to help with, where diverse perspectives are sought out rather than shut down. Getting there requires some rough work and ruffled feathers. We'll know we're on a good path when we start seeing tech executives who are as diverse as the images Gemini generates.
