AI and Bias

As we discussed in earlier trends, a number of industry leaders and scientists believe the biggest risk tied to artificial intelligence is the downfall of humanity. But as DEI practitioners, we also need to be part of addressing two more tangible risks: AI created with harmful biases built into its core, and AI that does not reflect the diversity of the users it serves. To do this, we need to expand AI talent pools and explicitly test AI-driven technologies for bias.

The tech industry remains very male and fairly culturally homogeneous. This lack of diversity is reflected in the products it produces. Recent research has shown that algorithms have acquired the same biases that lead people (in the UK and US, at least) to match pleasant words and white faces in implicit association tests.

https://www.theguardian.com/technology/2017/apr/13/ai-programs-exhibit-racist-and-sexist-biases-research-reveals

Other research shows AI’s impact on people of color and the poor:

  • Facial recognition software has trouble identifying women of color.
  • Accents often trip up smart speakers like Alexa.
  • Software widely used to assess defendants’ risk of recidivism was twice as likely to mistakenly flag black defendants as likely to commit future crimes.
  • When asked to respond to a threatening robot, humans are quicker to shoot black robots than white ones.
  • Amazon Rekognition (facial recognition software) falsely matched 28 members of Congress to mugshots; the people it wrongfully matched were disproportionately people of color.
  • AI facial recognition software also failed on iconic black women including Oprah Winfrey, Serena Williams, and Michelle Obama.

https://money.cnn.com/2018/07/23/technology/ai-bias-future/index.html

When it comes to AI, inclusion matters – from who designs it to who sits on the company boards and which ethical perspectives are included. Otherwise, we risk constructing machine intelligence that mirrors a narrow and privileged vision of society, with its familiar biases and stereotypes.

As practitioners, it is critical that we call attention to the rights and needs of those who are so often excluded from the conversation. We need to encourage third-party AI audits and better training for those who create AI to understand their own implicit biases and worldview. We also need to be aware of efforts like the Toronto Declaration, which draws on international human rights law to argue that people discriminated against by artificial intelligence algorithms should have an avenue to seek redress. We need to learn about Black Girls Code, Black in AI, the Partnership on AI, the AI Now Institute, and Stanford University’s Human-Centered AI institute, and perhaps join the growing AI for Good movement.
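One concrete form such an audit can take is comparing a model’s error rates across demographic groups, which is how the recidivism disparity above was uncovered. Below is a minimal sketch in Python; the function name, data format, and group labels are all hypothetical, chosen only to illustrate the idea:

```python
from collections import defaultdict

def false_positive_rates(records):
    """Compute the false positive rate for each demographic group.

    records: iterable of (group, actual, predicted) tuples, where
    actual/predicted are booleans (True = flagged as high risk).
    """
    fp = defaultdict(int)   # actual negatives incorrectly flagged
    neg = defaultdict(int)  # total actual negatives per group
    for group, actual, predicted in records:
        if not actual:
            neg[group] += 1
            if predicted:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg if neg[g]}

# Hypothetical audit data: (group, reoffended, flagged_high_risk).
# None of these individuals reoffended, so every flag is a false positive.
data = [
    ("A", False, True), ("A", False, False), ("A", False, False), ("A", False, False),
    ("B", False, True), ("B", False, True), ("B", False, False), ("B", False, False),
]

rates = false_positive_rates(data)
print(rates)  # Group B is wrongly flagged at twice Group A's rate
```

A gap like the one this toy data produces (0.5 versus 0.25) is exactly the kind of disparity a third-party audit should surface and question before a system is deployed.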

To continue your work on being Future Fluent, Google or Bing these groups to learn what is being done to address issues of AI and Bias.