What a picture of Alexandria Ocasio-Cortez in a bikini tells us about the disturbing future of AI

From www.norsemathology.org

There are three readings associated with this week's topic:
* [https://www.technologyreview.com/2021/01/29/1017065/ai-image-generation-is-racist-sexist/ An AI saw a cropped photo of AOC. It autocompleted her wearing a bikini.] Image-generation algorithms are regurgitating the same sexist, racist ideas that exist on the internet.
 
* [https://www.theguardian.com/commentisfree/2021/feb/03/what-a-picture-of-alexandria-ocasio-cortez-in-a-bikini-tells-us-about-the-disturbing-future-of-ai What a picture of Alexandria Ocasio-Cortez in a bikini tells us about the disturbing future of AI]: New research on image-generating algorithms has raised alarming evidence of bias. It's time to tackle the problem of discrimination being baked into tech, before it is too late.
* [https://arxiv.org/pdf/2010.15052.pdf Image Representations Learned With Unsupervised Pre-Training Contain Human-like Biases]: When compared with statistical patterns in online image datasets, our findings suggest that machine learning models can automatically learn bias from the way people are stereotypically portrayed on the web.
The first reading is very accessible.

Revision as of 03:33, 10 February 2021

