Title

為何應該以人工智能強化倫理衝突的緊急決策?

Parallel Title

Why AI Should Augment Urgent Decision-Making Involving Ethical Conflicts?

DOI

10.29843/JCCIS.202307_(45).0002

Author

甘偵蓉 (Zhen-Rong Gan)

Keywords

人工智能; 倫理決策; 緊急情況; 倫理兩難/衝突; 公共參與及民主; Artificial Intelligence (AI); Ethical decision-making; Urgent situations; Ethical dilemmas/conflicts; Public participation and democracy

Journal

資訊社會研究 (Journal of Cyber Culture and Information Society)

Volume/Issue (Publication Date)

No. 45 (2023/07/01)

Pages

19-50

Language

Traditional Chinese; English

Chinese Abstract (translated)

For urgent matters involving ethical dilemmas or conflicts, where people struggle to make good decisions, this article argues that artificial intelligence (AI) should be designed to improve or even replace human ethical decision-making. Such an AI ethical-conflict decision-making system appears on the surface to replace human decision-making, but in substance it is humans drawing on AI to make better ethical decisions, moving from data-driven to AI-driven decision-making. By examining two types of criticism raised against using AI for ethical decisions, methodological and ontological, this article identifies which parts of these criticisms AI developers should heed and which rest on misunderstandings of such decision-making systems. It further argues that if this AI-assisted approach to urgent ethical-conflict decisions in public affairs incorporates the participation of stakeholders and even the public at every stage, from system design through deployment, it realizes the "Artificial Improved Democracy" that Sinnott-Armstrong and Skorburg proposed in 2021.

English Abstract

This article argues that artificial intelligence (AI) should be designed to improve or even replace human decision-making in urgent situations involving ethical dilemmas or conflicts, in which people notoriously struggle to make good decisions. While an AI ethical-conflict decision-making system may appear to supplant human decision-making, in reality humans use AI to make better ethical decisions, shifting from data-driven to AI-driven decision-making. By examining two types of criticism raised against using AI in ethical decision-making, methodological and ontological, this article identifies which points AI developers should heed and which rest on misunderstandings of such systems. Furthermore, the article suggests that involving stakeholders and even the public at every stage of AI system design and deployment would realize the "Artificial Improved Democracy" proposed by Sinnott-Armstrong and Skorburg in 2021, which could enhance the development of AI ethical decision-making systems for public affairs.

Subject Classification: Basic & Applied Sciences > Information Science
Social Sciences > Sociology
Social Sciences > Communication
References
  1. 甘偵蓉、許漢 (2020)。AI 倫理的兩面性初探:人類研發 AI 的倫理道德與 AI 的倫理規範。《歐美研究》,50(2),231-292。
  2. 陳瑞麟 (2020)。科技風險與倫理評價:以科技風險倫理來評估台灣基改生物與人工智能的社會爭議。《科技、醫療與社會》,30,13-65。
  3. 劉育成 (2020)。如何成為「人」:缺陷及其經驗作為對人工智能研究之啟發—以自動駕駛技術為例。《資訊社會研究》,38,93-126。
  4. 劉湘瑤、張震興、張瓅勻、趙恩、李思賢 (2021)。人工智能倫理的挑戰與反思:文獻分析。《資訊社會研究》,41,27-64。
  5. 中央社 (2021.04.30)。〈台中嘉陽高中女學生校門前車禍喪命 肇事司機赴靈堂道歉〉。Retrieved July 21, 2023, from https://www.cna.com.tw/news/asoc/202104300259.aspx
  6. Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., Bonnefon, J.-F., & Rahwan, I. (2018). The moral machine experiment. Nature, 563, 59-64.
  7. Bigman, Y. E., & Gray, K. (2020). Life and death decisions of autonomous vehicles. Nature, 579(7797), E1-E2.
  8. Braun, E., Broestl, N., Chou, D., & Vandersluis, R. (2021). The challenges of using machine learning for organ allocation. Reply to Sinnott-Armstrong and Skorburg. Journal of Practical Ethics.
  9. Cervantes, J. A., López, S., Rodríguez, L. F., Cervantes, S., Cervantes, F., & Ramos, F. (2020). Artificial moral agents: A survey of the current status. Science and Engineering Ethics, 26(2), 501-532.
  10. Colson, E. (2019). What AI-driven decision making looks like. Harvard Business Review. Retrieved July 21, 2023, from https://hbr.org/2019/07/what-ai-driven-decision-making-looks-like
  11. De Sousa, R. (1979). The rationality of emotions. Dialogue: Canadian Philosophical Review/Revue canadienne de philosophie, 18(1), 41-63.
  12. Dixon, B. (2020, March 11). The "moral machine" is bad news for AI ethics. Mind Matters News. Retrieved July 21, 2023, from https://mindmatters.ai/2020/03/the-moral-machine-is-bad-news-for-ai-ethics/#
  13. Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., & Srikumar, M. (2020). Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI. Berkman Klein Center for Internet & Society.
  14. Furey, H., & Hill, S. (2021). MIT's moral machine project is a psychological roadblock to self-driving cars. AI and Ethics, 2(1), 151-155.
  15. Gebru, T. (2020). Race and gender. In The Oxford Handbook of Ethics of AI. Oxford: Oxford University Press.
  16. Iagnemma, K. (2018). Why we have the ethics of self-driving cars all wrong. World Economic Forum Annual Meeting. Retrieved July 21, 2023, from https://medium.com/world-economic-forum/why-we-have-the-ethics-of-self-driving-cars-all-wrong-92566f282733
  17. Jaques, A. E. (2019). Why the moral machine is a monster. University of Miami School of Law, 10, 1-10.
  18. Kalra, N., & Groves, D. G. (2017). The enemy of good: Estimating the cost of waiting for nearly perfect automated vehicles. RAND Corporation.
  19. Kauppinen, A. (2002). Moral sentimentalism. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy. Retrieved July 21, 2023, from https://plato.stanford.edu/archives/win2017/entries/experimental-philosophy/
  20. Knobe, J., & Nichols, S. (2017). Experimental philosophy. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy. Retrieved July 21, 2023, from https://plato.stanford.edu/archives/win2017/entries/experimental-philosophy/
  21. McIntyre, A. (2019). The doctrine of double effect. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy. Retrieved July 21, 2023, from https://plato.stanford.edu/archives/spr2019/entries/double-effect/
  22. Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., Spitzer, E., Raji, I. D., & Gebru, T. (2019). Model cards for model reporting. Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* '19). New York, NY, USA.
  23. Richardson, H. S. (2018). Moral reasoning. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy. Retrieved July 21, 2023, from https://plato.stanford.edu/archives/fall2018/entries/reasoning-moral/
  24. Sinnott-Armstrong, W., & Skorburg, J. A. (2021). How AI can AID bioethics. Journal of Practical Ethics, 9(1).
  25. Vallor, S., Ager, S., & Luan, R. (2021). The digital basanos: AI and the virtue and violence of truth-telling. 2021 IEEE International Symposium on Technology and Society (ISTAS).
  26. van Wynsberghe, A., & Robbins, S. (2019). Critiquing the reasons for making artificial moral agents. Science and Engineering Ethics, 25, 719-735.
  27. Véliz, C. (2021). Moral zombies: Why algorithms are not moral agents. AI & Society, 36(2), 487-497.
  28. 王道維 (2021)。〈文字標註與偏見處理〉,《人文社會 AI 導論線上課程第九集》,清華大學。Retrieved from https://nthuhssai.site.nthu.edu.tw/p/406-1535-212970,r9286.php
  29. 祖旭華 (2020)。〈自駕車道德難題與問卷調查的研究方法〉,台灣人工智慧行動網。Retrieved July 21, 2023, from https://ai.iias.sinica.edu.tw/self-driving-car-survey/
  30. 許漢 (2020)。〈正義〉,《華文哲學百科》(2021 版本),王一奇(編)。Retrieved July 21, 2023, from http://mephilosophy.ccu.edu.tw/entry.php?entry_name
Cited by
  1. (2024)。AI開發過程的倫理權衡:自駕車決策案例研究。《歐美研究》,54(1),1-67。