What a picture of Alexandria Ocasio-Cortez in a bikini tells us about the disturbing future of AI

From www.norsemathology.org



Source and Background

There are two readings associated with this week's topic:


  • The first reading is very accessible: Arwa Mahdawi makes the case that "discrimination [is] being baked into tech". The people creating that tech? Mathematicians and computer scientists.

    Arwa's article has links to several related articles which you might check out.

    Let me give you a very personal example. My wife (who is Togolese) is very dark-skinned; I am very light-skinned. When we Zoom (as we have been doing a lot during this pandemic), we sometimes use a virtual background; when we do, my wife is far more likely to "disappear" than I am. We jokingly accuse Zoom of racial bias, but IT'S NO JOKE. Similarly, when taking pictures in Africa, many cameras with an "auto focus" feature will zoom right in on a white person in a sea of black folks. Now why is that? It's not chance....

  • The second reading is technical. I noticed that some of you were saying in the discussion that you wished that you knew more about the source material (for Global Warming, say). Well, be careful what you wish for!

    While it's technical, I think that you can read it for some sense of how these algorithms work, and for the means by which Arwa comes to her summary statistics (a rough sketch of the kind of statistic involved appears just after this list).

    For those of you with an interest in statistics and computer science, there are some interesting technical aspects to this research; those of you with an interest in education need to be aware of what your students might be doing -- and might be subjected to -- someday!

    And for those of you with an interest in programming, and, in particular, the statistical package R, you can find the code, images, etc. used to produce this paper at https://github.com/ryansteed/ieat.
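
To demystify the second reading a bit, here is a rough Python sketch (only a sketch, not the authors' actual code from the repository above) of the kind of embedding-association test that produces effect sizes and permutation p-values like the ones reported in the paper. The arrays X, Y, A, and B are hypothetical stand-ins for sets of image embeddings (two "target" groups and two "attribute" groups); every name below is made up for illustration.

    import numpy as np

    def cos(u, v):
        # cosine similarity between two embedding vectors
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

    def assoc(w, A, B):
        # how much closer is embedding w to attribute set A than to attribute set B?
        return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

    def effect_size(X, Y, A, B):
        # standardized difference in association between the two target sets
        sx = [assoc(x, A, B) for x in X]
        sy = [assoc(y, A, B) for y in Y]
        return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)

    def p_value(X, Y, A, B, n_perm=10_000, seed=0):
        # one-sided permutation test: how often does randomly re-labelling the
        # targets give an association at least as large as the one observed?
        rng = np.random.default_rng(seed)
        observed = sum(assoc(x, A, B) for x in X) - sum(assoc(y, A, B) for y in Y)
        pooled = np.vstack([X, Y])
        hits = 0
        for _ in range(n_perm):
            idx = rng.permutation(len(pooled))
            Xi, Yi = pooled[idx[:len(X)]], pooled[idx[len(X):]]
            hits += (sum(assoc(x, A, B) for x in Xi)
                     - sum(assoc(y, A, B) for y in Yi)) > observed
        return hits / n_perm

    # Made-up data: ten 8-dimensional "embeddings" per set, purely for illustration.
    rng = np.random.default_rng(1)
    X, Y, A, B = (rng.normal(size=(10, 8)) for _ in range(4))
    print(effect_size(X, Y, A, B), p_value(X, Y, A, B, n_perm=1000))

The observed association is compared to what random re-labellings of the targets produce; a small p-value says the association is unlikely to be an accident of which images landed in which group.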

Questions:

  1. The title is very provocative (which, one might argue, every title should be). In order to train anything, you need a training set: that is, you need to train something to recognize certain objects, or reach a conclusion, or.... But it has to be trained. There are two kinds of training: supervised and unsupervised. The argument here is that, if you leave an algorithm to reach its conclusions based on what it finds on the web, it's going to learn all the evils of the web! But if you allow humans to supervise its learning, the trainers build their own prejudices into the process. How can you win? (A minimal sketch of the two kinds of training appears after this list.)
  2. While not exactly pertinent to the issue of AI, it was interesting years back when the "Math is Hard" Barbie came out (see https://www.cambridge.org/core/books/gender-differences-in-mathematics/math-is-hard-barbie-1994-responses-of-threat-vs-challengemediated-arousal-to-stereotypes-alleging-intellectual-inferiority/204AA5C6127CD0C2F474C732432E052B). How are stereotypes "baked in" to your area of interest? What have you observed?
  3. Examining the "source document": can you pick apart any particular mathematics you've encountered before, and see how it's useful here? For any of you with any statistical training, you'll certainly notice there are a lot of p-values.... Can you imagine yourself performing any of these analyses? Can you imagine how you might carry out some of these tests with the training you already have?
  4. The Turing test (see https://plato.stanford.edu/entries/turing-test/) is a classic challenge, dealing with the question of whether machines can think. We decide that they are actually "thinking machines" if we cannot distinguish between interactions with a real human and the machine. About this "Imitation game", Turing (1950) says: "I believe that in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning. I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted." It's 70 years on: are we there? Are Siri and Alexa there?
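
Here, as promised, is a minimal sketch of the two kinds of training from Question 1, using scikit-learn and made-up data (nothing here comes from the readings; the variable names and labels are purely hypothetical). In the supervised case a human supplies the labels, judgments and all; in the unsupervised case the algorithm is left to find whatever structure the data contains.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))             # 200 made-up examples, 2 features each

    # Supervised: a human hands the algorithm its labels, so whatever judgments
    # (or prejudices) went into the labelling get learned right along with them.
    y = (X[:, 0] + X[:, 1] > 0).astype(int)   # hypothetical human-assigned labels
    clf = LogisticRegression().fit(X, y)
    print(clf.predict(X[:5]))                 # predictions mimic the labeller

    # Unsupervised: no labels at all -- the algorithm groups the examples by
    # whatever structure it finds in the data, for better or for worse.
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    print(km.labels_[:5])                     # cluster assignments, nothing more

Neither route is prejudice-free: the supervised model inherits the labeller's choices, and the unsupervised one inherits whatever is lurking in the data it was handed -- which is precisely the bind the question describes.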

Your Answers:

Question 1:

Question 2:

Question 3:

Question 4:
