The future of AI: our journey towards sexist, racist robots

Gender bias is alive and well in artificial intelligence and machine learning. It’s time to talk about it.

In my previous job, I contacted a founder to ask whether he’d like to come and talk to the start-ups in our acceleration program. He had an interesting story, with mistakes, failures, and successes, which I thought would be great to share with the young founders I was working with. He replied, said yes, and CC’ed his assistant, Amy, so that I could schedule a time with her. After a few emails back and forth, I realized I had been talking to a bot. It worked quite well: we booked a time on his calendar, and on the agreed day he came to my office and spent an hour in conversation with the start-ups.

Scheduling appointments was part of my job. This was such a mundane, automatic task for me. Not once did I stop to wonder why I had been talking with a female secretary bot. Why did she have to be named Amy?

Replicating inequalities through technology

There are countless examples of gender and race bias in AI. Virtual assistants are routinely given female names and voices: Apple has Siri, Microsoft has Cortana, and Amazon has Alexa. When Google Translate is asked to render a gender-neutral Turkish or Chinese pronoun in English, “he” gets attached to “hardworking” while “she” is paired with “lazy”. Facial-analysis software that identifies white men with 99% accuracy has been shown to misclassify darker-skinned women as much as 35% of the time. A 2015 study showed that in a Google Images search for “CEO”, only 11% of the people displayed were women.

I could go on forever. These stories are all over the web, like the time Microsoft accidentally created a racist, sexist Twitter bot (Tay), or when MIT researchers turned a bot into a psychopath (Norman) by training it on Reddit.

In popular culture, we think of robots as impartial and neutral: think of Viki in “I, Robot” or Samantha in “Her”. We like to believe that AI will be able to tell right from wrong. But how can we build moral values into machine learning when we cannot define and agree on those values ourselves?

Biased technology is not unique to artificial intelligence, but AI’s exponential rise and ever-growing place in the world make the problem urgent. The implications of embedding societal biases such as sexism and racism into machine learning are terrifying, as Judith Spitz, founding program director of the WiTNY Initiative, puts it:

“We are hurtling towards a time when our biology will be equal parts technology and physiology. Think about the implications for the human race if the technology that is destined to be the essence of who we are as a species is developed largely under the leadership and guidance of a single gender.” (Source)

Create diverse teams of scientists

Certainly, part of the solution is to recruit more balanced workforces. Naming a virtual assistant “Amy” or “Alexa” is a conscious decision made by real human beings doing their jobs. A diverse team of engineers and AI scientists could keep companies from replicating gender and race bias in the products and services they offer. Furthermore, testing prototypes on a wide range of people might have stopped the release of VR headsets that give women motion sickness, or soap dispensers that fail to detect darker skin. In the end, companies benefit from diverse teams, not only at the communication and marketing level but also in the way their products are imagined and developed.

That is also why ethicists are more important than ever: we need them to remind us that it is our responsibility, as a society, to create products and services that do not replicate bias. That is crucial for removing the human-introduced sexism and racism. But what about the automatically generated bias?

Detect and quantify the bias in data

Think about it this way: in the example above, nobody at Google decided that only 11% of the image results for “CEO” would be women. That is not a human mistake but a problem with the data, and it is an extremely insidious issue in machine learning, because most AI systems are trained on massive datasets that reflect the world as it is. Here is an example of why this is particularly dangerous:

“Last year, researchers used deep learning to identify skin cancer from photographs. They trained their model on a data set of 129,450 images, 60% of which were scraped from Google Images. But fewer than 5% of these images are of dark-skinned individuals, and the algorithm wasn’t tested on dark-skinned people. Thus the performance of the classifier could vary substantially across different populations.” (Source)
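
Such gaps only show up if you look for them. Below is a minimal sketch of a per-subgroup evaluation, assuming scikit-learn; the model, the random toy data, and the skin-type labels are hypothetical stand-ins for illustration, not the study’s actual code:

    # Minimal sketch: evaluate a trained classifier separately per subgroup.
    # Everything here (model, toy data, "skin_type" labels) is hypothetical.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)

    # Toy data: 1,000 samples, 10 features, a binary label, and a subgroup
    # tag that is heavily imbalanced, as in the skin-cancer data set above.
    X = rng.normal(size=(1000, 10))
    y = rng.integers(0, 2, size=1000)
    skin_type = rng.choice(["lighter", "darker"], size=1000, p=[0.95, 0.05])

    model = RandomForestClassifier(random_state=0).fit(X[:800], y[:800])

    # Report accuracy separately for each subgroup of the held-out set.
    X_test, y_test, groups = X[800:], y[800:], skin_type[800:]
    preds = model.predict(X_test)
    for group in np.unique(groups):
        mask = groups == group
        acc = accuracy_score(y_test[mask], preds[mask])
        print(f"{group}: n={mask.sum()}, accuracy={acc:.2f}")

A single headline accuracy number would hide the two things this loop exposes: how the model performs on each group, and how few test examples the under-represented group even has.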

Sure, sometimes the problem is in the algorithm, as when Google Translate defaults to the masculine pronoun: the people writing the code or testing the data can imprint their own views on the machine. That can be fixed by raising awareness of inequalities and actively looking for gender and race bias. However, what can we do when the data we use to introduce AIs to the world is, in itself, sexist and racist?
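
One answer is to quantify the bias before a model ships. A common probe is to check whether learned word embeddings place occupation words closer to “he” than to “she”; if they do, the training corpus itself carries the association. Here is a minimal sketch using toy vectors; a real analysis would load pretrained embeddings such as word2vec or GloVe instead:

    # Minimal sketch: measure gendered associations in word embeddings.
    # The vectors are tiny made-up values; real analyses use pretrained
    # embeddings (e.g. word2vec or GloVe) with hundreds of dimensions.
    import numpy as np

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Hypothetical embeddings, keyed by word.
    vecs = {
        "he":       np.array([0.9, 0.1, 0.2]),
        "she":      np.array([0.1, 0.9, 0.2]),
        "engineer": np.array([0.8, 0.2, 0.3]),
        "nurse":    np.array([0.2, 0.8, 0.3]),
    }

    # Positive score: the word leans towards "he"; negative: towards "she".
    for word in ("engineer", "nurse"):
        bias = cosine(vecs[word], vecs["he"]) - cosine(vecs[word], vecs["she"])
        print(f"{word}: {bias:+.2f}")

Run at scale over many occupation words, this kind of measurement surfaces the corpus-level associations that show up in Google Translate’s pronoun choices, and it is a first step towards monitoring or correcting them.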

Machine learning and artificial intelligence raise many ethical questions, and most of them have no obvious answer. What is important, however, is to stay critical of the tools we use on a daily basis. Ask yourself: do you really need to speak with a female virtual assistant?
