What a picture of Alexandria Ocasio-Cortez in a bikini tells us about the disturbing future of AI

From www.norsemathology.org


Source and Background

There are two readings associated with this week's topic:


  • The first reading is very accessible: Arwa Mahdawi makes the case that "discrimination [is] being baked into tech". The people creating that tech? Mathematicians and computer scientists.

    Arwa's article has links to several other articles which you might check out.

    Let me give you a very personal example. My wife (who is Togolese) is very dark-skinned; I am very light-skinned. When we Zoom (as we have been doing a lot during this pandemic), we sometimes use a virtual background; when we do, my wife is far more likely to "disappear" than I am. We jokingly accuse Zoom of racial bias, but IT'S NO JOKE. Similarly, when taking pictures in Africa with cameras that have an "auto focus" feature, many will focus right in on a white person in a sea of black folks. Now why is that? It's not chance.... (A toy sketch of how this can happen appears after this reading list.)

  • The second reading is technical. I noticed that some of you were saying in the discussion that you wished you knew more about the source material (for Global Warming, say). Well, be careful what you wish for!

    While it is technical, I think that you can read it to get some sense of how these algorithms work, and of the means by which Arwa arrives at her summary statistics.

    For those of you with an interest in statistics and computer science, there are some interesting technical aspects to this research; and those of you with an interest in education need to be aware of what your students might be doing -- and might be subjected to -- someday!
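
As promised above, here is a toy sketch (my own, not drawn from either reading) of how a detection rule tuned on unrepresentative data can bake in the bias I described: a naive detector whose brightness threshold was chosen to suit light skin will simply fail on darker skin. The function and threshold below are invented for illustration; real systems are far more complex, but the failure mode is analogous.

    # Toy illustration (NOT the actual Zoom or camera algorithm):
    # a detector whose threshold was tuned on mostly light-skinned
    # examples systematically misses darker-skinned subjects.
    import numpy as np

    def naive_detector(patch, threshold=120):
        """Flag a patch as a 'person' if its mean brightness clears a
        threshold -- a stand-in for a model fit to light-skinned data."""
        return float(np.mean(patch)) > threshold

    light_skin_patch = np.full((8, 8), 190)  # brighter pixels
    dark_skin_patch = np.full((8, 8), 80)    # darker pixels

    print(naive_detector(light_skin_patch))  # True  -- detected
    print(naive_detector(dark_skin_patch))   # False -- "disappears"

The fix is not mysterious: choose training data (and thresholds) that represent everyone -- which, as Arwa's article argues, is precisely where these systems fall short.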

Questions:

  1. The title is very provocative (which, one might argue, every title should be). In order to train anything, you need a training set: that is, you need to train a system to recognize certain objects, or to reach a conclusion, and so on. But it has to be trained. There are two kinds of training: supervised and unsupervised (see the sketch after this list). The argument here is that, if you leave an algorithm to reach its conclusions based on what it finds on the web, it's going to learn all the evils of the web! But if you have humans supervise its learning, they build their own prejudices into the process! How can you win?
  2. The Turing test is a classic challenge, dealing with the question of whether machines can think. We decide that they are actually "thinking machines" if we cannot distinguish between interactions with a real human and interactions with the machine. About this "imitation game", Turing (1950) says: "I believe that in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning. I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted." It's 70 years on: are we there? Are Siri and Alexa there?
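
As promised in Question 1, here is a minimal sketch contrasting the two kinds of training. Python and scikit-learn are my assumptions of convenience (the readings don't prescribe a library), and the toy data and labels are invented purely for illustration:

    # Supervised vs. unsupervised training on a toy data set.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression

    # Toy "training set": four points in the plane, forming two clumps.
    X = np.array([[0.0, 0.1], [0.2, 0.0], [5.0, 5.1], [5.2, 4.9]])

    # Supervised: a human supplies the labels, so any prejudice baked
    # into those labels is learned faithfully by the model.
    y = np.array([0, 0, 1, 1])  # human-chosen labels
    clf = LogisticRegression().fit(X, y)
    print(clf.predict([[0.1, 0.1]]))  # -> [0]

    # Unsupervised: no labels at all; the algorithm invents its own
    # grouping from whatever structure (or bias) the data contains.
    km = KMeans(n_clusters=2, n_init=10).fit(X)
    print(km.labels_)  # e.g. [0 0 1 1]; the cluster numbers are arbitrary

Either way, the data itself (scraped from the web, say) or the human-chosen labels can carry prejudice into the model -- which is exactly the bind Question 1 describes.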

Your Answers:

Question 1:

Question 2:

