Title

AI可解釋性的法學意義及其實踐

Parallel Title

Legal Significance of Explainable AI and Its Practice

DOI

10.6199/NTULJ.202311/SP_52.0001

Author

黃詩淳(Sieh-Chuen Huang)

Keywords

Explainability/Interpretability; Right to Explanation; Model-Centric Interpretation; Subject-Centric Interpretation; Legal Analytics; Global Interpretability; Local Interpretability

Journal

臺大法學論叢 (National Taiwan University Law Journal)

Volume/Issue (Publication Date)

Vol. 52, Special Issue (2023/11/01)

Pages

931-972

Language

Traditional Chinese; English

Chinese Abstract

In recent data science, the so-called "explainability" of AI carries two connotations. The first is interpretability, an explanation produced after understanding, which includes subject-centric interpretation and model-centric interpretation. The second is transparency, achieved through methods such as decomposition or "model-agnostic systems" (e.g., surrogate models). In the legal domain, by contrast, the "right to demand an explanation" invoked by regulations and judicial decisions uses the word "explanation," and what it entails, and whether it resembles the data science community's "explainability," remains heavily debated. This article argues that when a higher degree of explanation is required (for example, in automated decision-making by the public sector), explanations produced by the methods under transparency may be too complex to be meaningful to the affected persons and may also infringe the model maker's trade secrets. The law should instead focus on the two methods under interpretability: "subject-centric" and "model-centric" interpretation. The former provides the subject with information about people who received decisions similar to their own; the latter includes an overview of the training data, the type of model, the most important factors, and the model's performance. Only such explanations satisfy the "meaningful information" requirement of Article 15 of the GDPR. These explanations do not extend to the weight of each factor or the source code. Finally, with regard to judicial AI that may emerge in the future, this article takes research on legal analytics as an example to illustrate the relationship between the processing of legal data, the algorithmic process, and explainability, so that users such as judges and lawyers can properly exercise the "right to demand an explanation."
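
As an illustration of the "model-centric" items listed above, the following minimal sketch (not from the article; it assumes synthetic data, hypothetical factor names, and scikit-learn) shows how an overview of the training data, the model type, the most important factors, and the model's performance can be reported without disclosing per-factor weights or source code:

    # Minimal sketch (assumptions: synthetic data, hypothetical factor names,
    # scikit-learn) of the four "model-centric" items the abstract treats as
    # "meaningful information" under Article 15 GDPR.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import accuracy_score

    X, y = make_classification(n_samples=500, n_features=5, random_state=0)
    feature_names = ["factor_1", "factor_2", "factor_3", "factor_4", "factor_5"]

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)

    # (1) overview of the training data, (2) model type, (3) most important
    # factors (names only -- per the abstract, per-factor weights need not be
    # disclosed), (4) model performance.
    print("training data:", X_tr.shape[0], "cases,", X_tr.shape[1], "factors")
    print("model type:", type(model).__name__)
    ranked = sorted(zip(feature_names, model.feature_importances_),
                    key=lambda t: t[1], reverse=True)
    print("most important factors:", [name for name, _ in ranked[:3]])
    print("accuracy on held-out cases:", accuracy_score(y_te, model.predict(X_te)))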

English Abstract

This article attempts to clarify whether, or in which respects, "explainable AI," a research hotspot in the data science community, can satisfy the "explainability" or "right to explanation" required in the legal domain. First, an analysis of recent data science research on explainable AI reveals two connotations of "explainability." One is interpretability, the explanation researchers produce after understanding a model; the other is transparency, achieved through methods such as decomposition that yield an "explanation-producing system." Next, the article turns to discussions of "explanation" in the legal domain. The word "explanation" is often used when regulations and judicial decisions require information about algorithms, although adjacent concepts such as information access, disclosure, and due process appear even more frequently. Considerable debate remains over whether regulations such as the GDPR give rise to a "right to explanation" and what that right entails. Comparing the idea of "explanation" in data science and in law, this article argues that when a higher level of explanation is required (for example, when reviewing public-sector decisions), exogenous approaches such as the surrogate models developed by data scientists do not satisfy the "meaningful information" defined by law and are therefore not legally adequate explanations. The information provided by AI producers should at least include an overview of the training data, the type of model, the most important factors, and the performance of the model. This information, constituting a "production system of interpretation," may comply with the "meaningful information" requirement of Article 15 of the GDPR. By contrast, the weight of each factor and the source code fall outside the information that must be legally disclosed. Finally, with regard to judicial AI that may appear in the future, the article takes research on legal analytics as an example to illustrate the relationship between data processing and explainability, so that users such as judges and lawyers can properly exercise the "right to explanation."
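
To make concrete the "exogenous approaches such as surrogate models" that the abstract argues fall short of legally adequate explanation, the following minimal sketch (an illustration with synthetic data and scikit-learn, not the article's own code) fits an interpretable tree to a black-box model's predictions; what the surrogate explains is the black box's behaviour only up to its measured fidelity:

    # Minimal sketch (assumptions: synthetic data, scikit-learn) of a global
    # surrogate model: a readable decision tree is trained to mimic a black-box
    # model's outputs rather than the true labels, so it "explains" only an
    # approximation of the black box -- one reason the article doubts its
    # adequacy as legally "meaningful information".
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text
    from sklearn.metrics import accuracy_score

    X, y = make_classification(n_samples=500, n_features=5, random_state=1)

    black_box = RandomForestClassifier(n_estimators=100, random_state=1).fit(X, y)

    # Train the surrogate on the black box's predictions, not on the labels y.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=1)
    surrogate.fit(X, black_box.predict(X))

    # Fidelity: how closely the readable surrogate tracks the black box.
    fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
    print(f"surrogate fidelity to the black box: {fidelity:.2f}")
    print(export_text(surrogate))  # human-readable rules, valid only up to fidelity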

Subject Classification: Social Sciences > Law
References
  1. 邵軒磊,黃詩淳(2020)。新住民相關親權酌定裁判書的文字探勘:對「平等」問題的法實證研究嘗試。臺大法學論叢,49(特刊),1267-1308。
  2. 黃詩淳,邵軒磊(2019)。人工智慧與法律資料分析之方法與應用:以單獨親權酌定裁判的預測模型為例。臺大法學論叢,48(4),2023-2073。
  3. 黃詩淳,邵軒磊(2018)。酌定子女親權之重要因素:以決策樹方法分析相關裁判。臺大法學論叢,47(1),299-344。
  4. 鄭明政(2020)。從 State v. Loomis 案件看 AI 應用於司法審判上的若干問題。台日法政研究,4,165-178。
  5. Adadi, A., Berrada, M. (2018). Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI). IEEE Access, 6, 52138-52160.
  6. Angelino, E., Larus-Stone, N., Alabi, D., Seltzer, M., Rudin, C. (2018). Learning Certifiably Optimal Rule Lists for Categorical Data. Journal of Machine Learning Research, 18, 1-78.
  7. Arrieta, A. B., Díaz-Rodríguez, N., Ser, J. D., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R., Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI. Information Fusion, 58, 82-115.
  8. Ashley, K. D. (2017). Artificial Intelligence and Legal Analytics: New Tools for Law Practice in the Digital Age. Cambridge University Press.
  9. Bastani, O., Kim, C., Bastani, H. (2019). Unpublished manuscript.
  10. Beaudouin, V., Bloch, I., Bounie, D., Clémençon, S., d'Alché-Buc, F., Eagan, J., Maxwell, W., Mozharovskyi, P., Parekh, J. (2020). Unpublished manuscript.
  11. Citron, D. K., Pasquale, F. (2014). The Scored Society: Due Process for Automated Predictions. Washington Law Review, 89(1), 1-33.
  12. Coglianese, C., Lehr, D. (2019). Transparency and Algorithmic Governance. Administrative Law Review, 71, 1-57.
  13. Deeks, A. (2019). The Judicial Demand for Explainable Artificial Intelligence. Columbia Law Review, 119(7), 1829-1850.
  14. Diakopoulos, N. (2016). Accountability in Algorithmic Decision Making. Communications of the ACM, 59(2), 56-62.
  15. Doshi-Velez, F., Kim, B. (2017). Unpublished manuscript.
  16. Edwards, L., Veale, M. (2017). Slave to the Algorithm? Why a ‘Right to an Explanation’ Is Probably Not the Remedy You Are Looking for. Duke Law and Technology Review, 16, 18-84.
  17. European Commission (2020, February 19). White Paper on Artificial Intelligence: A European Approach to Excellence and Trust. https://commission.europa.eu/system/files/2020-02/commission-white-paper-artificial-intelligence-feb2020_en.pdf
  18. Fong, R., Vedaldi, A. (2017). Interpretable Explanations of Black Boxes by Meaningful Perturbation. 2017 IEEE International Conference on Computer Vision (ICCV).
  19. Gelfert, A. (2010). Kant and the Enlightenment's Contribution to Social Epistemology. Episteme, 7(1), 79-99.
  20. Gilpin, L. H., Bau, D., Yuan, B. Z., Bajwa, A., Specter, M., Kagal, L. (2019). Unpublished manuscript.
  21. Goodman, B., Flaxman, S. (2017). European Union Regulations on Algorithmic Decision Making and a “Right to Explanation”. AI Magazine, 38(3), 50-57.
  22. Greenberg, M. (2021, July 7). Legal Interpretation. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Fall 2021 ed.). Stanford University. https://plato.stanford.edu/archives/fall2021/entries/legal-interpretation/
  23. Gunning, D. (2017, May 1). Explainable Artificial Intelligence (XAI). DARPA. https://sites.cc.gatech.edu/~alanwags/DLAI2016/(Gunning)%20IJCAI-16%20DLAI%20WS.pdf
  24. Gunning, D., Aha, D. W. (2019). DARPA's Explainable Artificial Intelligence (XAI) Program. AI Magazine, 40(2), 44-58.
  25. Gunning, D., Vorm, E., Wang, J. Y., Turek, M. (2021). DARPA's Explainable AI (XAI) Program: A Retrospective. Applied AI Letters, 2, 1-11.
  26. Hacker, P., Krestel, R., Grundmann, S., Naumann, F. (2020). Explainable AI under Contract and Tort Law: Legal Incentives and Technical Challenges. Artificial Intelligence and Law, 28(4), 415-439.
  27. Hase, P., Bansal, M. (2020). Unpublished manuscript.
  28. Herman, B. (2017). Unpublished manuscript.
  29. High-Level Expert Group on Artificial Intelligence (2019, April 8). Ethics Guidelines for Trustworthy AI. European Commission. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
  30. Holmes, O. W. (1899). The Theory of Legal Interpretation. Harvard Law Review, 12(6), 417-420. https://doi.org/10.2307/1321531
  31. Joseph, M. (2019, November 24). Interpretability: Cracking Open the Black Box: Part III. Deep and Shallow. https://deep-and-shallow.com/2019/11/24/interpretability-cracking-open-the-black-box-part-iii/
  32. Knight, W. (2017, March 14). The U.S. Military Wants Its Autonomous Machines to Explain Themselves. MIT Technology Review. https://www.technologyreview.com/2017/03/14/243295/the-us-military-wants-its-autonomous-machines-to-explain-themselves/
  33. Kotsiantis, S. B. (2007). Supervised Machine Learning: A Review of Classification Techniques. Informatica, 31, 249-268.
  34. Kroll, J. A., Huey, J., Barocas, S., Felten, E. W., Reidenberg, J. R., Robinson, D. G., Yu, H. (2017). Accountable Algorithms. University of Pennsylvania Law Review, 165, 633-706.
  35. Linardatos, P., Papastefanopoulos, V., Kotsiantis, S. (2020). Explainable AI: A Review of Machine Learning Interpretability Methods. Entropy, 23(1), 1-45.
  36. Lipton, Z. C. (2017). Unpublished manuscript.
  37. Miller, T. (2019). Explanation in Artificial Intelligence: Insights from the Social Sciences. Artificial Intelligence, 267, 1-38.
  38. Montavon, G., Lapuschkin, S., Binder, A., Samek, W., Müller, K.-R. (2017). Explaining Nonlinear Classification Decisions with Deep Taylor Decomposition. Pattern Recognition, 65, 211-222.
  39. OECD Legal Instruments (2019, May 22). Recommendation of the Council on Artificial Intelligence (OECD/LEGAL/0449). https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449
  40. Paige, M. A., Amrein-Beardsley, A. (2020). “Houston, We Have a Lawsuit”: A Cautionary Tale for the Implementation of Value-Added Models for High-Stakes Employment Decisions. Educational Researcher, 49(5), 350-359.
  41. Pasquale, F. (2015). The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press.
  42. Pruett, W. A., Hester, L. R. (2016). The Creation of Surrogate Models for Fast Estimation of Complex Model Outcomes. PLOS ONE, 11(6), e0156574.
  43. Ribeiro, M. T., Singh, S., Guestrin, C. (2016). Unpublished manuscript.
  44. Rudin, C. (2019). Unpublished manuscript.
  45. Saranya, A., Subhashini, R. (2023). A Systematic Review of Explainable Artificial Intelligence Models and Applications: Recent Developments and Future Trends. Decision Analytics Journal, 7, 1-14.
  46. Sarkar, D. (2018, May 25). The Importance of Human Interpretable Machine Learning. Medium. https://towardsdatascience.com/human-interpretable-machine-learning-part-1-the-needand-importance-of-model-interpretation-2ed758f5f476
  47. Selbst, A. D., Barocas, S. (2018). The Intuitive Appeal of Explainable Machines. Fordham Law Review, 87, 1085-1139.
  48. Sharp, M. P. (1935). The Classical American Doctrine of “The Separation of Powers”. The University of Chicago Law Review, 2(3), 385-436. https://doi.org/10.2307/1596321
  49. Wachter, S., Mittelstadt, B., Russell, C. (2018). Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR. Harvard Journal of Law & Technology, 31(2), 841-887.
  50. Wachter, S., Mittelstadt, B., Floridi, L. (2017). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2), 76-99.
  51. Waltl, B., Vogl, R. (2018). Explainable Artificial Intelligence: The New Frontier in Legal Informatics. Jusletter IT, 4, 1-10.
  52. Waterman, D. A., Peterson, M. A. (1981). Rand Corporation: The Institute for Civil Justice.
  53. 財團法人中華民國國家資訊基本建設產業發展協進會(2020)。Unpublished manuscript.
  54. 財團法人金融聯合徵信中心(2017)。歐盟個人資料保護規則。
  55. 張永健,何漢葳,李宗憲(2017)。或重於泰山、或輕於鴻毛:地方法院車禍致死案件撫慰金之實證研究。政大法學評論,149,139-219。
  56. 黃詩淳,邵軒磊(2017)。運用機器學習預測法院裁判:法資訊學之實踐。月旦法學雜誌,270,86-96。
  57. 劉靜怡(2019)。淺談 GDPR 的國際衝擊及其可能因應之道。月旦法學雜誌,286,5-31。
  58. 鄭伊廷(2021)。試析「一般資料保護規則」下自動化決策的解釋權爭議。經貿法訊,279,13-23。
  59. 魏世和(2020)。歐盟數位發展之形塑:以人工智慧與資料政策為中心。經貿法訊,267,17-21。