DA2210 HT20 (50708)

Homework 8a - Ethics

  • Due Nov 18, 2020 by 10pm
  • Points 1
  • Submitting a file upload
  • Available Nov 11, 2020 at 7pm - Dec 14, 2020 at 8am
This assignment was locked Dec 14, 2020 at 8am.

Homework 8

Due Wednesday Nov 18 at 22:00

Choose one of the three topics below, all of which deal with ethical aspects of recent technology, in particular the ethics of AI and its applications. You will discuss these in the seminar next week.

1. Ethical aspects of autonomous vehicles

2. Surveillance and facial recognition

3. Risks of AI development

All of these could be explored at length. However, this is a homework assignment, and the intent is mainly to prepare for the discussion. A concise summary of one or two pages is sufficient.

 

1. Ethical aspects of autonomous vehicles

a. Do the Moral Machine online experiment. Think carefully and record your answers. It may also be interesting to let some of your acquaintances of different ages and backgrounds try it and compare their answers to yours (not mandatory).

b. Read the article: Awad, Edmond; Dsouza, Sohan; Kim, Richard; Schulz, Jonathan; Henrich, Joseph; Shariff, Azim; Bonnefon, Jean-François; Rahwan, Iyad (24 October 2018). "The Moral Machine experiment." Nature 563 (7729): 59–64. Reading the article itself (pages 59–64) is enough; you don't need to go through the supplementary material.

c. Reflect on your own answers (and those of others) in light of this article. Did your answers follow a clear ethical principle? Do they reflect any particular cultural bias?

d. Discuss the ethics of autonomous vehicles. How relevant are situations like those considered in the experiment to real autonomous vehicles? Are there other ethical considerations that should be taken into account (for example, in relation to the law)? Try to involve concepts in ethics from the lectures.

e. Summarize in a few points what you think are essential design principles for autonomous vehicles (concerning their interaction with people and other environmental factors).

Optional reading 1:

Edmond Awad, Inside the Moral Machine (an informal discussion of the experiment and its impact by one of the authors)

Jean-François Bonnefon, Azim Shariff, Iyad Rahwan, The social dilemma of autonomous vehicles, Science, 24 Jun 2016: Vol. 352, Issue 6293, pp. 1573–1576.

Yochanan E. Bigman and Kurt Gray, Life and death decisions by autonomous vehicles, Nature 579, E1–E2 (2020).

Iagnemma, Karl (2018). "Why we have the ethics of self-driving cars all wrong." World Economic Forum Annual Meeting.

The Moral Machine is a variation of the trolley problem, which has been discussed by many philosophers interested in moral dilemmas; see Wikipedia or one of the original articles:

Thomson, Judith J. (1976). "Killing, letting die, and the trolley problem." The Monist 59: 204–217.

Very optional reading 1:

More peripherally related to the topic, here are some interesting articles on how automobiles conquered our streets 100 years ago. In the 1910s, streets in the US were still considered a public space in which automobiles had to adapt to pedestrians rather than the other way around.

Stromberg, Joseph (2015). "The forgotten history of how automakers invented the crime of 'jaywalking.'" Vox.com.

Peter D. Norton (2007), Street Rivals: Jaywalking and the Invention of the Motor Age Street.

New York streets 1914 and 1925: photographs of Hester Street.

 

2. Surveillance and facial recognition

The discussion of surveillance and privacy has long roots. The 18th-century philosopher Jeremy Bentham, mentioned in lecture 7, is also the originator of a famous conceptual design for prisons (as well as hospitals, schools, and asylums) called the Panopticon, in which cells are arranged in a circular building around a central guard tower. The design has almost never been built in reality; the Presidio Modelo in Cuba, constructed in the 1920s and now a museum, is an exception.

Images: Bentham's Panopticon; the Presidio Modelo.

The use of AI-based face recognition has recently become a topic of discussion, and the leading suppliers of face recognition technology (e.g., Amazon and IBM) have been forced to develop ethical guidelines, restrict sales, or even abandon the market. There are several ethical aspects of surveillance, and of face recognition in particular. The most recent developments have probably been driven by the difficulty of avoiding bias and discrimination, but there are also ethical questions relating to privacy.

a. Read the article by Joy Buolamwini and Timnit Gebru:

Buolamwini, J., Gebru, T. "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification." Proceedings of Machine Learning Research 81:1–15, 2018 Conference on Fairness, Accountability, and Transparency.

and/or look at gendershades.org
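The core of the Gender Shades analysis is to break a classifier's accuracy down by intersectional subgroup (skin type crossed with perceived gender) and compare the subgroups, rather than report a single overall accuracy. A minimal sketch of that kind of analysis, using entirely made-up predictions (the subgroup names and numbers below are illustrative, not the paper's data):

```python
# Sketch of an intersectional accuracy analysis in the spirit of
# "Gender Shades": per-subgroup accuracy and the largest disparity.
# All data here are invented for illustration.

from collections import defaultdict

# (subgroup, true_label, predicted_label) -- hypothetical classifier output
results = [
    ("lighter-male", "male", "male"),
    ("lighter-male", "male", "male"),
    ("lighter-female", "female", "female"),
    ("lighter-female", "female", "male"),
    ("darker-male", "male", "male"),
    ("darker-male", "male", "female"),
    ("darker-female", "female", "male"),
    ("darker-female", "female", "male"),
]

# Count correct predictions per subgroup.
correct = defaultdict(int)
total = defaultdict(int)
for group, true, pred in results:
    total[group] += 1
    correct[group] += int(true == pred)

accuracy = {g: correct[g] / total[g] for g in total}
for g, acc in sorted(accuracy.items()):
    print(f"{g}: {acc:.0%}")

# The headline number in this style of audit: the gap between the
# best-served and worst-served subgroup.
gap = max(accuracy.values()) - min(accuracy.values())
print(f"largest subgroup accuracy gap: {gap:.0%}")
```

The point of the disaggregation is that an overall accuracy (here 50%) can hide very different error rates for different groups, which is precisely the disparity the paper documents for commercial systems.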

It is also useful to read some more general sources and news articles to learn about recent developments, such as:

Facial Recognition Software Facing Challenges And Seeing Some Success

Three Big Tech Players Back Out of Facial Recognition Market

Amazon bans police use of facial recognition technology for one year

b. A more general ethical aspect of surveillance technologies is individual privacy. To relate issues around privacy to the ethics concepts in the lectures, it may be helpful to read section 5.3 of the article: Bernhard Debatin, "Ethics, Privacy, and Self-Restraint in Social Networking," in S. Trepte and L. Reinecke (eds.), Privacy Online, Springer, 2011.

c. Discuss the ethical aspects of surveillance, and in particular surveillance using face recognition. Find some concrete examples where you consider the use of this technology ethical, and examples where you consider it unethical, and motivate your choices in some detail. Try to make use of ethics concepts from the lectures.

d. Include at least a short reflection on the ethics of surveillance in general - do you consider privacy a basic human right?

Optional reading 2:

Nicole Martinez-Martin, What Are Important Ethical Implications of Using Facial Recognition Technology in Health Care?

The Algorithmic Justice League

Face recognition for bears

Very optional reading and viewing 2: 

As a more peripheral comment: other types of reflection on surveillance can be found in contemporary art, where it is an important theme, often addressed in more provocative and critical ways (these works would hardly pass a scientific ethics review). See, for example:

Kohei Yoshiyuki, The Park. Nighttime photographs, taken with infrared film and flash in Tokyo's Shinjuku, Yoyogi, and Aoyama Parks during the 1970s, capturing, e.g., the illicit sexual encounters taking place there under the cloak of darkness.

Sophie Calle, e.g., L'Hôtel (where she worked for three weeks as a chambermaid at a hotel in Venice and spied on the guests, e.g., by photographing the momentarily unoccupied rooms), The Shadow (where she, through her mother, hired a private detective to spy on herself), and other works.

The Bureau of Inverse Technology (Natalie Jeremijenko et al.), Suicide Box (1996). Video recordings triggered by people jumping off the Golden Gate Bridge.

Merry Alpern, Dirty Windows (1994). Shots of prostitutes and their clients taken clandestinely through the bathroom window of a sex club on Wall Street.

 

3. Risks of AI development

The risks (and promises) of further AI development have been a growing topic of discussion during the last decade, in particular the more speculative long-term threats to humanity involving the development of autonomous entities with superhuman intelligence. The idea of a singularity, or intelligence explosion, in which machines (physical or virtual) become capable of developing ever more intelligent machines in an accelerating process, probably originates with I. J. Good in the 1960s and was later popularized in books by Ray Kurzweil.

One of the people involved in this discussion has been Nick Bostrom, a Swedish philosopher focusing on existential risks in general and the future of humanity, now a professor at the University of Oxford.

a. Start by reading (parts of) the article by him and Eliezer Yudkowsky:

Nick Bostrom and Eliezer Yudkowsky, "The ethics of artificial intelligence," in The Cambridge Handbook of Artificial Intelligence, eds. Keith Frankish & William M. Ramsey (Cambridge University Press, 2014): 316–334.

and, if you like, also the more popular article from the New Yorker.

b. Discuss the potential threats and risks, e.g., to society, involved in the further development of AI technologies, and how researchers should act in view of these risks. Try to consider both short-term risks, such as potentially malicious (or, in your view, unethical) uses of the technology, and more speculative long-term threats. Also try to refer to concepts in ethics from the lectures, both ethical theories and the guidelines for researchers (which you may or may not agree with).

Optional reading 3:

Stanford Encyclopedia of Philosophy, Ethics of Artificial Intelligence and Robotics

Stuart Russell, Daniel Dewey, Max Tegmark, Research Priorities for Robust and Beneficial Artificial Intelligence, and the connected open letter.

Nick Bostrom's home page, with additional reading.

 
