Timnit Gebru on her firing from Google, the dangers of AI and big tech's bias



" Artificial intelligence affects people all over the world, but people have no say in how to shape it " - Timnit Gebru.

▲ Photo: Winni Wintermeyer/The Guardian

"It feels like a gold rush," says Timnit Gebru. "In fact, it's a gold rush . A lot of people who make money are not really involved. But it's human beings who decide whether this should be done. We should remember that we have the power to make those decisions."

Gebru was talking about her field of expertise: artificial intelligence. On the day we spoke via video call, she was in Kigali, Rwanda, preparing to lead a workshop and take part in a panel conversation at an international AI conference. The conference focused on the massive growth in AI capabilities and on a fact often overlooked in the frantic discussion around AI: many AI systems are likely to be built on massive biases, inequalities and power imbalances.

This was the first time the International Conference on Learning Representations (ICLR) had been held in an African country (translator's note: previous editions were almost always held in developed economies in Europe and the United States) - a stark illustration of how much the big technology companies have neglected the global south. When Gebru talks about "artificial intelligence affecting people all over the world, but people have no say in how to shape it," her own backstory makes the point even more starkly.


Growing up in Ethiopia, Gebru became a refugee as a teenager when war broke out between Ethiopia and Eritrea, where her parents were born. After a year in Ireland, she moved to the suburbs of Boston, Massachusetts, and from there enrolled at Stanford University in Northern California, opening the door to a career at the cutting edge of the computer industry: first at Apple, then at Microsoft, and finally Google. But at the end of 2020, her time at Google came to an abrupt end.

As the technical co-lead of Google's small ethical AI team, Gebru had co-authored an academic paper warning about the kind of AI that is increasingly woven into our lives, taking internet search and user recommendations to new levels of sophistication and threatening to master human talents such as writing, composing music and analyzing images. The clear danger, the paper said, is that such supposed "intelligence" is based on huge data sets that "overrepresent hegemonic viewpoints and encode biases potentially damaging to marginalized populations." Put more bluntly, AI threatens to deepen the dominance of a way of thinking that is white, male, relatively affluent, and centered on the US and Europe.

In response, senior Google executives asked Gebru to either retract the paper or remove her and her colleagues' names from it. This set off a chain of events that led to her departure. Google said she resigned, but Gebru insisted she was fired.

She said all this showed her that big tech companies are consumed by the drive to develop artificial intelligence: "You don't want somebody like me getting in your way. I think it made it very clear that unless there is external pressure to do something different, companies are not going to regulate themselves. We need regulation, and we need something better than a pure profit motive."

▲ Photo: Gebru speaking at TechCrunch Disrupt 2018

Photograph: Kimberly White/Getty Images for TechCrunch

Gebru, 40, sometimes speaks with dizzying speed, as if the rich detail of her life might not fit into the time allotted for our conversation. She tends to mix the precise, measured vocabulary of a tech insider with a sense of the absurd, centered on a particularly pointed irony: that an industry full of people with liberal, self-consciously progressive views so often seems to push the world in the opposite direction of those beliefs.

A recurring theme is racism, including her own experiences of prejudice in the US education system and in Silicon Valley. She says that when she was a high school student in Massachusetts, a teacher told her bluntly, regarding her aptitude for science: "I've met a lot of people like you who think they can come here from other countries and take the hardest courses." That kind of directness often gave way to passive aggression and implied doubt: despite her high grades in physics, her request to study the subject at a higher level was met with the suggestion that she might find it too difficult.

"As an immigrant, the free form of racism confuses me very much," she said. "People who sound like they really care about you, but they're like: 'Don't you think this is going to be hard for you?' It took me a while to really figure out what was going on."


A more blatant experience of prejudice came later, when she and a friend, a Black woman, were attacked in a bar. "That was the most horrific experience I've ever had in America," she said. "It was in San Francisco, another supposedly liberal place. I was attacked, and no one came to help me. It was horrible to see that: you're being strangled, and people passing by just stare at you indifferently."

She called the police. "It was worse than not calling them, because at first they accused me of lying multiple times and kept telling me to calm down. Then they handcuffed my friend, even though she had just been attacked." Her friend was later detained at the police station.

At Stanford, although some of her white classmates condescendingly asked whether she had been admitted through an affirmative action program, her undergraduate years were spent in an environment where her seniors at least "talked a lot about race, diversity, and people coming from different places." After working as an audio engineer at Apple (2005-2007), she returned to Stanford for her PhD and found a very different environment.

She said her life became "going to the office with the same group of people every day — it was kind of like work. There was no one who looked like me. It was just overwhelming."

▲ Photo: Winni Wintermeyer/The Guardian

Gebru: "I'm not worried about machines taking over the world, I'm worried about groupthink, narrow-mindedness and arrogance in the AI ​​community . "

Gebru's work came to focus on cutting-edge artificial intelligence: she created a system showing how data about car ownership in a given neighborhood could highlight disparities bound up with race, crime rates, voting behavior and income levels. In retrospect, this kind of work might look like the bedrock of technologies that could blur into automated surveillance and law enforcement, but Gebru admits that "none of those bells were ringing in my head. The connection between technology and issues of diversity and oppression only came to me later."

Soon, though, she began to think hard about how big tech's innovations reflected the inequalities of its offices, labs and social world. In 2015, Google had to apologize after the AI system in its photo app labeled photos of Black people as gorillas. The following year, ProPublica, the non-profit investigative newsroom, found that software used across the United States to assess prison inmates' likelihood of reoffending grossly discriminated against Black people. At the same time, Gebru was becoming increasingly aware that underlying these stories were problems in the culture of the tech industry.

Around this time, she was at a big AI conference in Montreal where, at a party organized by Google, she was openly harassed by a group of white men. "One guy forcibly kissed me, another took my picture. I was kind of stunned: I didn't do anything. They had a party at an academic conference with an open bar, and they didn't even make it clear that it was meant to be a professional event. Obviously, you should never harass a woman, or anyone, in that way. But at these conferences, that kind of behavior was common." The conference's organizers say they have since "created" a code of conduct, and that they now have "a new closely monitored single point of contact for concerns and complaints."

The following year, Gebru went out of her way to count the number of other Black attendees at the same event. She found that out of 8,500 delegates, only six were Black. In response, she wrote a now-prescient Facebook post: "I'm not worried about machines taking over the world. I'm worried about groupthink, insularity and arrogance in the AI community."

Against this background, it might seem surprising that, after a year at Microsoft's AI lab devoted to fairness, accountability, transparency and ethics, Gebru took a new job at Google. She was hired in 2018, thanks to an approach from Margaret Mitchell, an expert in algorithmic bias, to co-lead a team working on the ethics of artificial intelligence. "I was terrified," she said, "but I thought: 'OK, Margaret Mitchell is going to be there. We can work together. Who else would I get to work with?' That's how I got into it. I thought: 'I wonder how long I can last here.'"

"It was a tough decision," she said. "Because when I go to Google, I hear from several women about sexual harassment and other types of harassment, even though when they get harassed they say, 'Don't do that.'" But it still doesn't help.

When Gebru joined the company, Google employees were vocally opposing its involvement in Project Maven, which used AI to analyze surveillance footage captured by military drones (Google ended its involvement in 2018). Two months later, Google employees took part in a massive walkout over systemic racism, sexual harassment and gender inequality. Gebru said she became aware of a "high tolerance for harassment and misconduct of all kinds."

▲ Google employees in New York stage a walkout, November 2018

Photograph: Brian R Smith/AFP/Getty Images

To highlight some of the ethical and political issues surrounding AI, her team hired Google's first social scientist. She and her colleagues were proud of the diversity of their small team and of the issues they raised within the company, including questions related to Google's ownership of YouTube. A colleague from Morocco warned about a popular YouTube channel in that country called Chouf TV, "which was basically operated by the government intelligence arm, and they used it to harass journalists and dissidents. YouTube did nothing about it." (Google says it would "need to review the content to see whether it violates our policies. But, in general, our harassment policy strictly prohibits content that threatens individuals, engages in prolonged or malicious insults of someone based on intrinsic attributes, or reveals personally identifiable information.")

Then, in 2020, Gebru, Mitchell and two other colleagues wrote the paper that led to Gebru's departure. Titled "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?", it was concerned with AI centered on so-called large language models, the kind of systems behind OpenAI's ChatGPT and Google's new PaLM 2, which, roughly speaking, draw on vast amounts of data to perform complex tasks and generate content.

These sources, usually drawn from the world wide web, inevitably include material that is often copyrighted (if an AI system can compose prose in the style of a particular writer, it is because it has absorbed much of that writer's work). But Gebru and her co-authors had a more serious concern: that trawling the online world risks reproducing its worst aspects, from hate speech to views that exclude marginalized people and places. "In accepting large amounts of web text as 'representative' of 'all' of humanity, we risk perpetuating dominant viewpoints, increasing power imbalances and further entrenching inequality," they wrote.

When the paper was submitted for internal review, Gebru was quickly contacted by a Google vice president. She said Google raised some vague objections, for example that she and her colleagues were too "negative" about AI. Google then asked Gebru either to retract the paper or to remove the names of the Google employees from it.

She told the company the paper would not be retracted, and that the authors' names would be removed only if Google explained its objections; if it did not, she said, she would resign. She also emailed women working in Google's AI division, saying the company was "silencing marginalized voices."

Then, in December 2020, while she was on leave, one of her closest colleagues texted her, asking whether an email they had seen saying she had left the company was true. Google later said her behavior had been "inconsistent with the expectations of a Google manager."

I wonder how she felt at the time. "I wasn't thinking. I was just acting, like: 'I need a lawyer. I need to tell my story. I want to know what they're planning, I want to know what they're going to say about me.'" She pauses. "But I was fired. While I was on leave, on my way to visit my mom during the pandemic."

▲ Google headquarters in Mountain View, California

Photo: Anadolu Agency/Getty Images

In response to Gebru's claims about workplace harassment and misconduct at Google, her experience at the conference party in Montreal, and the circumstances of her departure, the company's press office emailed me a set of "background notes."

"We are committed to a safe, inclusive, and respectful workplace, and we take misconduct very seriously," Google said. "We have a strict anti-harassment and discrimination policy that thoroughly investigates all reported concerns and Allegations are taken with firm action. We also provide employees with multiple ways to report concerns."

Five years ago, the company said, it overhauled "the way we handle and investigate employee concerns, introducing new care programs for employees who report concerns, and making arbitration optional for Google employees."

Regarding AI systems' use of copyrighted material, a spokesperson said Google will "innovate in this space responsibly, ethically, and legally" and plans to "continue to collaborate and discuss with publishers and the ecosystem to find ways for this new technology to help strengthen their work and benefit the entire web ecosystem."

After leaving Google, Gebru founded the Distributed AI Research Institute (DAIR), to which she now devotes her time. "We have people in the US, the EU and Africa," she said. "We have social scientists, computer scientists, engineers, refugee advocates, labor organizers, activists... it's a diverse group."


She told me that the institute's fellows include a former Amazon delivery driver, as well as people who have done the tedious and sometimes harrowing work of manually flagging online content, including illegal and inappropriate material, in order to train AI systems. Much of this work takes place in developing countries. "There's a lot of exploitation in AI, and we want to bring it out into the open so people know what's going wrong," she said. "Also, AI isn't magic. There are a lot of people involved: humans."

At the same time, she is trying to look beyond a tendency in the tech industry and the media to fixate on fears of artificial intelligence taking over the planet and wiping out humanity, while questions about what the technology actually does, and who it benefits and harms, go ignored.

" This conversation attributes the initiative to the tool, not to the humans who build the tool ," she said. "It means you can focus the onus on: 'It's not my problem. It's the tool, it's so powerful that we don't know what it's going to do. Well, no, the problem is you. You're building something that has certain Features for profit. It's very distracting and takes the focus away from the real harm and what we need to do ."

How does she feel about taking on her old employer in Silicon Valley?

"I don't know if we're going to change them," she said. "We're never going to get a quadrillion dollars to do what we're doing. I just feel like we have to go all out. Maybe enough people do these little things and get organized and things change . That's me as desired.

Original link: https://www.theguardian.com/lifeandstyle/2023/may/22/there-was-all-sorts-of-toxic-behaviour-timnit-gebru-on-her-sacking-by-google-ais-dangers-and-big-techs-biases

Author丨John Harris (twitter: @johnharris1969)

Translator丨Wen Tao

Proofreading丨LsssY

Editor丨Li Nan


