Title

反思機器人的道德擬人主義

Parallel Title

Reflection on the Moral Anthropomorphism of Robots

DOI

10.7015/JEAS.202006_50(2).0003

Author

何宗興(Tsung-Hsing Ho)

Keywords

robots ; artificial intelligence (AI) ; moral agency ; moral anthropomorphism ; reactive attitudes

Journal

EurAmerica (歐美研究)

Volume/Issue (Publication Date)

Vol. 50, No. 2 (2020/06/01)

Pages

179-205

Language

Traditional Chinese

Chinese Abstract (translated)

If robots are to develop as science fiction imagines and work automatically without human supervision, we must ensure that they will not act in morally wrong ways. According to the behaviourist conception of moral agency, if a robot's outward behaviour is morally on a par with a human's, the robot can be regarded as a moral agent. From this, moral anthropomorphism about robots naturally follows: whatever moral rules apply to humans apply to robots. I oppose moral anthropomorphism. Drawing on Strawson's insights into interpersonal relationships and reactive attitudes, and taking paternalistic action as my example, I argue that because robots lack personhood and so cannot participate in interpersonal relationships, they should be subject to stricter limits than humans where paternalistic action is concerned.

English Abstract

If robots are to function automatically, without human supervision, as depicted in science fiction, then we must ensure that they do not commit moral wrongs. According to the behaviourist conception of moral agency, if robots, judged purely by their outward behaviour, perform morally as well as humans do, they can be regarded as moral agents. This naturally leads to moral anthropomorphism about robots: the view that whatever moral rules apply to humans apply equally to robots. I argue against moral anthropomorphism. Drawing on P. F. Strawson's insights into interpersonal relationships and reactive attitudes, and taking paternalistic actions as my example, I argue that because robots are not persons, they cannot participate in interpersonal relationships, and therefore their paternalistic actions towards humans ought to be more strictly limited than those of humans.

Subject Classification: Humanities > General Humanities
Social Sciences > General Social Sciences
References
  1. Allen, C., Varner, G., & Zinser, J. (2000). Prolegomena to any future artificial moral agent. Journal of Experimental & Theoretical Artificial Intelligence, 12(3), 251-261.
  2. Brożek, B., & Janik, B. (2019). Can artificial intelligences be moral agents? New Ideas in Psychology, 54, 101-106.
  3. Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machines, 14(3), 349-379.
  4. Fossa, F. (2018). Artificial moral agents: Moral mentors or sensible tools? Ethics and Information Technology, 20(2), 115-126.
  5. Grodzinsky, F. S., Miller, K. W., & Wolf, M. J. (2008). The ethics of designing artificial agents. Ethics and Information Technology, 10(2-3), 115-121.
  6. Gunkel, D. J. (2012). The machine question: Critical perspectives on AI, robots, and ethics. Cambridge, MA: MIT Press.
  7. Himma, K. E. (2009). Artificial agency, consciousness, and the criteria for moral agency: What properties must an artificial agent have to be a moral agent? Ethics and Information Technology, 11(1), 19-29.
  8. IMDb. (2004). Quotes. Retrieved from https://www.imdb.com/title/tt0343818/quotes/qt0474786
  9. Laukyte, M. (2017). Artificial agents among us: Should we recognize them as agents proper? Ethics and Information Technology, 19(1), 1-17.
  10. Snowdon, P., & Gomes, A. (2019). Peter Frederick Strawson. Retrieved from https://plato.stanford.edu/archives/spr2019/entries/strawson/
  11. Strawson, P. F. (1974). Freedom and resentment and other essays. London: Routledge.
  12. Torrance, S. (2012). Artificial agents and the expanding ethical circle. AI & Society, 28(4), 399-414.
  13. van Wynsberghe, A., & Robbins, S. (2019). Critiquing the reasons for making artificial moral agents. Science and Engineering Ethics, 25, 719-735.
  14. Wallach, W., & Allen, C. (2008). Moral machines: Teaching robots right from wrong. Oxford, UK: Oxford University Press.
  15. Watson, G. (2014). Peter Strawson on responsibility and sociality. In Oxford studies in agency and responsibility. Oxford, UK: Oxford University Press.
  16. Weber, K. (2013). What is it like to encounter an autonomous artificial agent? AI & Society, 28(4), 483-489.
Cited By
  1. (2024). On the difference between the Confucian "moral agent" and AI as an "artificial moral agent": A reflection based on Yangming's concept of "mind." 哲學與文化, 51(11), 5-22.