References
- 邵軒磊,黃詩淳(2020)。新住民相關親權酌定裁判書的文字探勘:對「平等」問題的法實證研究嘗試。臺大法學論叢,49(特刊),1267-1308。
- 黃詩淳,邵軒磊(2019)。人工智慧與法律資料分析之方法與應用:以單獨親權酌定裁判的預測模型為例。臺大法學論叢,48(4),2023-2073。
- 黃詩淳,邵軒磊(2018)。酌定子女親權之重要因素:以決策樹方法分析相關裁判。臺大法學論叢,47(1),299-344。
- 鄭明政(2020)。從 State v. Loomis 案件看 AI 應用於司法審判上的若干問題。台日法政研究,4,165-178。
- Adadi, A., & Berrada, M. (2018). Peeking inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI). IEEE Access, 6, 52138-52160.
- Angelino, E., Larus-Stone, N., Alabi, D., Seltzer, M., & Rudin, C. (2018). Learning Certifiably Optimal Rule Lists for Categorical Data. Journal of Machine Learning Research, 18, 1-78.
- Arrieta, A. B., Díaz-Rodríguez, N., Ser, J. D., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R., & Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI. Information Fusion, 58, 82-115.
- Ashley, K. D. (2017). Artificial Intelligence and Legal Analytics: New Tools for Law Practice in the Digital Age. Cambridge University Press.
- Bastani, O., Kim, C., & Bastani, H. (2019). Unpublished manuscript.
- Beaudouin, V., Bloch, I., Bounie, D., Clémençon, S., d'Alché-Buc, F., Eagan, J., Maxwell, W., Mozharovskyi, P., & Parekh, J. (2020). Unpublished manuscript.
- Citron, D. K., & Pasquale, F. (2014). The Scored Society: Due Process for Automated Predictions. Washington Law Review, 89(1), 1-33.
- Coglianese, C., & Lehr, D. (2019). Transparency and Algorithmic Governance. Administrative Law Review, 71, 1-57.
- Deeks, A. (2019). The Judicial Demand for Explainable Artificial Intelligence. Columbia Law Review, 119(7), 1829-1850.
- Diakopoulos, N. (2016). Accountability in Algorithmic Decision Making. Communications of the ACM, 59(2), 56-62.
- Doshi-Velez, F., & Kim, B. (2017). Unpublished manuscript.
- Edwards, L., & Veale, M. (2017). Slave to the Algorithm? Why a ‘Right to an Explanation’ Is Probably Not the Remedy You Are Looking for. Duke Law and Technology Review, 16, 18-84.
- European Commission (2020, February 19). White Paper on Artificial Intelligence: A European Approach to Excellence and Trust. https://commission.europa.eu/system/files/2020-02/commission-white-paper-artificial-intelligence-feb2020_en.pdf
- Fong, R., & Vedaldi, A. (2017). Interpretable Explanations of Black Boxes by Meaningful Perturbation. 2017 IEEE International Conference on Computer Vision (ICCV).
- Gelfert, A. (2010). Kant and the Enlightenment's Contribution to Social Epistemology. Episteme, 7(1), 79-99.
- Gilpin, L. H., Bau, D., Yuan, B. Z., Bajwa, A., Specter, M., & Kagal, L. (2019). Unpublished manuscript.
- Goodman, B., & Flaxman, S. (2017). European Union Regulations on Algorithmic Decision Making and a “Right to Explanation”. AI Magazine, 38(3), 50-57.
- Greenberg, M. (2021, July 7). Legal Interpretation. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Fall 2021 ed.). Stanford University. https://plato.stanford.edu/archives/fall2021/entries/legal-interpretation/
- Gunning, D. (2017, May 1). Explainable Artificial Intelligence (XAI). DARPA. https://sites.cc.gatech.edu/~alanwags/DLAI2016/(Gunning)%20IJCAI-16%20DLAI%20WS.pdf
- Gunning, D., & Aha, D. W. (2019). DARPA’s Explainable Artificial Intelligence (XAI) Program. AI Magazine, 40(2), 44-58.
- Gunning, D., Vorm, E., Wang, J. Y., & Turek, M. (2021). DARPA's Explainable AI (XAI) Program: A Retrospective. Applied AI Letters, 2, 1-11.
- Hacker, P., Krestel, R., Grundmann, S., & Naumann, F. (2020). Explainable AI under Contract and Tort Law: Legal Incentives and Technical Challenges. Artificial Intelligence and Law, 28(4), 415-439.
- Hase, P., & Bansal, M. (2020). Unpublished manuscript.
- Herman, B. (2017). Unpublished manuscript.
- High-Level Expert Group on Artificial Intelligence (2019, April 8). Ethics Guidelines for Trustworthy AI. European Commission. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
- Holmes, O. W. (1899). The Theory of Legal Interpretation. Harvard Law Review, 12(6), 417-420. https://doi.org/10.2307/1321531
- Joseph, M. (2019, November 24). Interpretability: Cracking open the black box: Part III. Deep and Shallow. https://deep-and-shallow.com/2019/11/24/interpretability-cracking-open-the-black-box-part-iii/
- Knight, W. (2017, March 14). The U.S. Military Wants Its Autonomous Machines to Explain Themselves. MIT Technology Review. https://www.technologyreview.com/2017/03/14/243295/the-us-military-wants-its-autonomous-machines-to-explain-themselves/
- Kotsiantis, S. B. (2007). Supervised Machine Learning: A Review of Classification Techniques. Informatica, 31, 249-268.
- Kroll, J. A., Huey, J., Barocas, S., Felten, E. W., Reidenberg, J. R., Robinson, D. G., & Yu, H. (2017). Accountable Algorithms. University of Pennsylvania Law Review, 165, 633-706.
- Linardatos, P., Papastefanopoulos, V., & Kotsiantis, S. (2020). Explainable AI: A Review of Machine Learning Interpretability Methods. Entropy, 23(1), 1-45.
- Lipton, Z. C. (2017). Unpublished manuscript.
- Miller, T. (2019). Explanation in Artificial Intelligence: Insights from the Social Sciences. Artificial Intelligence, 267, 1-38.
- Montavon, G., Lapuschkin, S., Binder, A., Samek, W., & Müller, K.-R. (2017). Explaining Nonlinear Classification Decisions with Deep Taylor Decomposition. Pattern Recognition, 65, 211-222.
- OECD Legal Instruments (2019, May 22). Recommendation of the Council on Artificial Intelligence (OECD/LEGAL/0449). https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449
- Paige, M. A., & Amrein-Beardsley, A. (2020). “Houston, We Have a Lawsuit”: A Cautionary Tale for the Implementation of Value-Added Models for High-Stakes Employment Decisions. Educational Researcher, 49(5), 350-359.
- Pasquale, F. (2015). The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press.
- Pruett, W. A., & Hester, R. L. (2016). The Creation of Surrogate Models for Fast Estimation of Complex Model Outcomes. PLOS ONE, 11(6), e0156574.
- Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). Unpublished manuscript.
- Rudin, C. (2019). Unpublished manuscript.
- Saranya, A., & Subhashini, R. (2023). A Systematic Review of Explainable Artificial Intelligence Models and Applications: Recent Developments and Future Trends. Decision Analytics Journal, 7, 1-14.
- Sarkar, D. (2018, May 25). The Importance of Human Interpretable Machine Learning. Medium. https://towardsdatascience.com/human-interpretable-machine-learning-part-1-the-needand-importance-of-model-interpretation-2ed758f5f476
- Selbst, A. D., & Barocas, S. (2018). The Intuitive Appeal of Explainable Machines. Fordham Law Review, 87, 1085-1139.
- Sharp, M. P. (1935). The Classical American Doctrine of “The Separation of Powers”. The University of Chicago Law Review, 2(3), 385-436. https://doi.org/10.2307/1596321
- Wachter, S., Mittelstadt, B., & Russell, C. (2018). Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR. Harvard Journal of Law & Technology, 31(2), 841-887.
- Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2), 76-99.
- Waltl, B., & Vogl, R. (2018). Explainable Artificial Intelligence: The New Frontier in Legal Informatics. Jusletter IT, 4, 1-10.
- Waterman, D. A., & Peterson, M. A. (1981). Rand Corporation: The Institute for Civil Justice.
- 財團法人中華民國國家資訊基本建設產業發展協進會(2020)。未出版。
- 財團法人金融聯合徵信中心(2017)。歐盟個人資料保護規則。
- 張永健,何漢葳,李宗憲(2017)。或重於泰山、或輕於鴻毛:地方法院車禍致死案件撫慰金之實證研究。政大法學評論,149,139-219。
- 黃詩淳,邵軒磊(2017)。運用機器學習預測法院裁判:法資訊學之實踐。月旦法學雜誌,270,86-96。
- 劉靜怡(2019)。淺談 GDPR 的國際衝擊及其可能因應之道。月旦法學雜誌,286,5-31。
- 鄭伊廷(2021)。試析「一般資料保護規則」下自動化決策的解釋權爭議。經貿法訊,279,13-23。
- 魏世和(2020)。歐盟數位發展之形塑:以人工智慧與資料政策為中心。經貿法訊,267,17-21。