Wheeler M (2020) Autonomy. In: Dubber MD, Pasquale F & Das S (eds.) Oxford Handbook of Ethics of AI. Oxford Handbooks. Oxford: Oxford University Press, pp. 343-358. https://global.oup.com/academic/product/the-oxford-handbook-of-ethics-of-ai-9780190067397?cc=gb&lang=en&
Unease regarding autonomous (self-governing) AI is most vividly expressed in the vision of an artificial superintelligence whose self-generated goals and interests diverge radically from those of humankind, and which thus places our well-being, and perhaps even our survival, at risk. The first question addressed by this chapter, then, is this: what conditions would an intelligent machine need to meet in order to exhibit the kind of autonomy that is operative in this dystopian scenario? There is, however, arguably a more pressing concern regarding a different class of AI systems: those that are autonomous in only the milder sense that, within their domains of operation, we are ceding, or will cede, some significant degree of control to them. Systems of this kind include self-driving cars and autonomous weapons systems. The second question addressed by this chapter, then, is this: are these already-in-the-world autonomous AI systems a genuine cause for concern? A key issue here concerns the properties of so-called deep learning networks. The chapter ends by suggesting briefly that the two kinds of autonomy discussed are connected in an interesting way.
Keywords: autonomous AI, autonomous weapons systems, control, deep learning, self-driving cars.