Homework 8 HT 2022
- Due 22 Nov 2022 by 17:00
- Points 1
- Submitting a file upload
- Available 1 Nov 2022 at 8:00 - 31 Jan 2023 at 17:00
Homework 8 - Ethics
Choose one of the three topics below, all of which deal with ethical aspects of recent technology, in particular the ethics of AI and its applications. You will discuss these in the seminar next week.
1. Ethical aspects of autonomous vehicles
2. Surveillance and facial recognition
3. Lethal autonomous weapons
In the written assignments below, try to explore these topics from ethical principles rather than just personal opinion.
1. Ethical aspects of autonomous vehicles
a. Do the Moral Machine online experiment. Think carefully and record your answers. It may also be interesting to let some of your acquaintances of different ages and backgrounds try this and compare their answers to yours (not mandatory).
b. Read the article Awad, Edmond; Dsouza, Sohan; Kim, Richard; Schulz, Jonathan; Henrich, Joseph; Shariff, Azim; Bonnefon, Jean-François; Rahwan, Iyad (24 October 2018). "The Moral Machine experiment". Nature 563 (7729): 59–64. Reading the actual article (pages 59–64) is enough; you don't need to go through the supplementary material.
c. Reflect on your own answers (and those of others) in light of this article. Did your answers follow a clear ethical principle? Do they reflect any particular cultural bias?
d. Discuss the ethics of autonomous vehicles. How relevant are situations like those considered in the experiment to real autonomous vehicles? Are there other ethical considerations that should be taken into account (for example, in relation to the law)? Try to make use of ethics concepts from the lectures.
e. Summarize in a few points what you think are essential design principles for autonomous vehicles (concerning their interaction with people and other environmental factors).
Optional reading:
Edmond Awad, Inside the Moral Machine (an informal discussion of the experiment and its impact by one of the authors)
Jean-François Bonnefon, Azim Shariff, Iyad Rahwan, The social dilemma of autonomous vehicles, Science, 24 Jun 2016: Vol. 352, Issue 6293, pp. 1573–1576.
Yochanan E. Bigman and Kurt Gray, Life and death decisions by autonomous vehicles, Nature 579, E1–E2 (2020).
Iagnemma, Karl. (2018) “Why we have the ethics of self-driving cars all wrong.” World Economic Forum Annual Meeting.
The Moral Machine is a variation of the trolley problem, which has been discussed by many philosophers interested in moral dilemmas; see Wikipedia or one of the original articles:
Thomson, Judith J. (1976) “Killing, letting die, and the trolley problem.” Monist 59: 204–17.
Additional related reading:
More peripherally related to the topic, here are some interesting articles on how automobiles conquered our streets 100 years ago. In the 1910s, streets in the US were still considered a public space where automobiles had to adapt to pedestrians rather than the other way around.
Stromberg, Joseph (2015). “The forgotten history of how automakers invented the crime of ‘jaywalking’.” Vox.com.
Peter D. Norton (2007), Street Rivals: Jaywalking and the Invention of the Motor Age Street
(Images: New York streets in 1914 and 1925.)
2. Surveillance and facial recognition
The discussion of surveillance and privacy has roots going back a long time. The 18th-century philosopher Jeremy Bentham, mentioned in the lectures, is actually the originator of a famous conceptual design for prisons (as well as hospitals, schools, and asylums) called the Panopticon, in which cells are arranged in a circular building around a central guard tower. The design has almost never been built in reality; the Presidio Modelo in Cuba, constructed in the 1920s and now a museum, is an exception (see the image below).
The use of AI-based facial recognition has become a topic of discussion recently, and the leading suppliers of facial recognition technology (e.g., Amazon and IBM) have been forced to develop ethical guidelines, restrict sales, or even abandon the market. There are several different ethical aspects of surveillance, and of facial recognition in particular. The more recent developments have probably been driven by the difficulty of avoiding bias and discrimination, but there are also ethical questions relating to privacy.
a. Read the article by Joy Buolamwini and Timnit Gebru:
Buolamwini, J., Gebru, T. "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification." Proceedings of Machine Learning Research 81:1–15, 2018 Conference on Fairness, Accountability, and Transparency.
and/or look at gendershades.org
b. It is also useful to read some more general sources and news articles to learn about recent developments, such as:
How facial recognition is identifying the dead in Ukraine
c. Discuss the ethical aspects of surveillance, and in particular surveillance using facial recognition. Find some concrete examples where you consider the use of this technology ethical, and examples where you consider it unethical, and motivate your choices in some detail. Try to make use of ethics concepts from the lectures.
d. Include at least a short reflection on the ethics of surveillance in general: do you consider privacy a basic human right?
Optional reading:
Nicole Martinez-Martin, What Are Important Ethical Implications of Using Facial Recognition Technology in Health Care?
The Algorithmic Justice League
Face recognition for bears
More optional reading:
Other types of reflection relating to surveillance can be found in contemporary art, where it is an important theme, often addressed in more provocative and critical ways (these works would hardly pass a scientific ethics review). See for example:
Kohei Yoshiyuki, The Park. Nighttime photographs, taken with infrared film and flash in Tokyo's Shinjuku, Yoyogi, and Aoyama parks during the 1970s, capturing, e.g., the illicit encounters taking place there under the cloak of darkness.
Sophie Calle, e.g., L'Hôtel (where she worked three weeks as a chambermaid at a hotel in Venice and spied on the guests, e.g., by photographing the momentarily unoccupied rooms), The Shadow (where she, through her mother, hired a private detective to spy on herself), and other works.
The Bureau of Inverse Technology (Natalie Jeremijenko et al.), Suicide Box (1996). Video recordings triggered by people jumping off the Golden Gate Bridge.
Merry Alpern, Dirty Windows (1994). Shots of prostitutes and their clients taken clandestinely through the bathroom window of a club on Wall Street.
3. Lethal autonomous weapons
One of the more controversial applications of AI algorithms is new, more intelligent autonomous weapons, an area where a considerable amount of research and development is going on in the military industry.
Autonomous weapons are not a new concept. Any lethal device that makes decisions autonomously based on sensor information, even if much simpler, could be said to fall into this category. Land mines and naval mines are the most obvious examples. Left-over land mines from different conflicts are a major problem and are estimated to kill up to 20,000 people yearly and injure many more. In Swedish waters, it is estimated that there are still around 40,000 naval mines left from the two world wars. But AI opens up many new possibilities, such as making weapons more specific by exploiting technologies such as facial recognition.
Numerous researchers and others have argued strongly against the development of lethal autonomous weapons. Some examples are:
The open letter quoted in Toby Walsh, Open letter: we must stop killer robots before they are built
The YouTube video Slaughterbots
Stuart Russell, Lethal Autonomous Weapons Exist; They Must Be Banned, IEEE Spectrum, 16 June 2021
Noel Sharkey, The evitability of autonomous robot warfare
International Red Cross Position Paper (2021): Artificial intelligence and machine learning in armed conflict: A human-centered approach
Stuart Russell, Anthony Aguirre, Ariel Conn, Max Tegmark, Why You Should Fear “Slaughterbots”—A Response
But there are also more moderate opinions, such as:
Paul Scharre, Why You Shouldn’t Fear “Slaughterbots”, IEEE Spectrum
The ethics of war in general can also be discussed. One possibility is to take a strict pacifist stance, but there is also a development of theories of just war (jus ad bellum and jus in bello) that some believe provide an ethical justification under some particular conditions.
1. Start by reading some of the debate on intelligent autonomous weapons and the ethics of warfare.
2. Should autonomous lethal weapons be banned, or be subject to other forms of restriction? First formulate at least three arguments for and against, with counterarguments, and then explain which you consider most convincing and why.
3. As a follow-up, reflect on the extent of an engineer's or computer scientist's moral responsibility for how the results of their work are used. Consider a concrete case, for example: one of your classmates asks you for moral guidance, since she or he has received a very tempting offer to carry out a master's thesis applying deep learning to the detection of low-probability-of-intercept radar. The thesis would be carried out within a Swedish company that mainly sells its products to the military sector (and is believed to follow Swedish law on the export of military materiel). Based on your ability to reason about ethical issues, would you advise your classmate for or against accepting the offer?
Or, if you prefer, you can choose a similar situation of your own involving a moral dilemma relating to research and development of potentially very harmful technology, and reason about it in the same way.
(Image: Peter Weiss, The machines attack the people, painted in Berlin, 1935.)
Further reading on the ethics of AI in general:
Nick Bostrom and Eliezer Yudkowsky, The ethics of artificial intelligence, in The Cambridge Handbook of Artificial Intelligence, eds. Keith Frankish & William M. Ramsey (Cambridge University Press, 2014): 316–334.
and, if you like, also the more popular article from the New Yorker.
Stanford Encyclopedia of Philosophy, Ethics of Artificial Intelligence and Robotics
Nick Bostrom's home page, with additional reading.
Handing in your solution
Please save your solution as a PDF file and hand it in both in Canvas (for grading) and in Peergrade (for peer review).
Peer grading
You will be asked to review the homework of two other students in Peergrade. Your solution will also be reviewed in this way. The peer review is a mandatory part of the course.
Feedback from your TA
Your seminar leader will grade your submission and report the result in Canvas. This may happen before the associated seminar, but if your seminar leader is busy it will happen afterwards.
Complete means you have passed the assignment.
Incomplete means you have to hand in a revised version.
Fail means that you will have to submit a new version and attend the make-up seminar.
The Fail grade will only be applied in exceptional circumstances such as plagiarized work.