Justificatory explanations in machine learning: for increased transparency through documenting how key concepts drive and underpin design and engineering decisions
Casacuberta, David 1967- (Universitat Autònoma de Barcelona. Departament de Filosofia)
Guersenzvaig, Ariel (Universitat de Vic - Universitat Central de Catalunya. Elisava, Facultat de Disseny i Enginyeria)
Moyano, Cristian (Consejo Superior de Investigaciones Científicas (Spain). Instituto de Filosofía)

Date: 2022
Abstract: Given the pervasiveness of AI systems and their potential negative effects on people's lives (especially among already marginalised groups), it becomes imperative to comprehend what goes on when an AI system generates a result, and on what grounds that result is reached. There are consistent technical efforts to make systems more "explainable" by reducing their opaqueness and increasing their interpretability and explainability. In this paper, we explore an alternative, non-technical approach to explainability that complements existing ones. Leaving aside technical, statistical, or data-related issues, we focus on the conceptual underpinnings of the design decisions made by developers and other stakeholders during the lifecycle of a machine learning project. For instance, the design and development of an app that tracks snoring to detect possible health risks presupposes some picture or another of "health", a key notion that conceptually underpins the project. We take it as a premise that these key concepts are necessarily present during design and development, albeit perhaps tacitly. We argue that by providing "justificatory explanations" of how the team understands the relevant key concepts behind its design decisions, interested parties could gain valuable insights and make better sense of the workings and outcomes of systems. Using the concept of "health", we illustrate how a particular understanding of it might influence decisions during the design and development stages of a machine learning project, and how making this explicit by incorporating it into ex-post explanations might increase their explanatory and justificatory power. We posit that greater conceptual awareness of the key concepts underpinning design and development decisions may benefit any attempt to develop explainability methods. We recommend that "justificatory explanations" be provided as technical documentation.
These are declarative statements that, at their simplest, contain: (1) a high-level account of the team's understanding of the key concepts relevant to the project's main domain, (2) an account of how these understandings drive decision-making during the life-cycle stages, and (3) the reasons (which may be implicit in the account) that the person or persons giving the explanation consider to have plausible justificatory power for the decisions made during the project.
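Since the abstract recommends recording these declarative statements as technical documentation, the three components might be captured in a structured, machine-readable form. The following is a purely illustrative sketch, not a schema prescribed by the paper; the class name, field names, and the sample entry for the snoring app are all hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class JustificatoryExplanation:
    """Hypothetical record of how a key concept was understood and how
    that understanding drove design decisions in an ML project."""
    key_concept: str          # (1) the concept, e.g. "health"
    understanding: str        # (1) the team's high-level account of the concept
    decisions_driven: list = field(default_factory=list)    # (2) lifecycle decisions it shaped
    justifying_reasons: list = field(default_factory=list)  # (3) reasons offered as justification

# Illustrative entry for the snoring-tracking app mentioned in the abstract
example = JustificatoryExplanation(
    key_concept="health",
    understanding=("Health understood narrowly as the absence of "
                   "sleep-related respiratory risk factors."),
    decisions_driven=["Treat frequent loud snoring episodes as risk events"],
    justifying_reasons=["Team relied on literature linking loud snoring to apnea risk"],
)
```

Such a record could accompany a model's ex-post explanations so that readers can trace outcomes back to the conceptual commitments behind the design.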
Funding: Ministerio de Economía, Industria y Competitividad FFI2017-85711-P
Agència de Gestió d'Ajuts Universitaris i de Recerca 2017-SGR-1773
Agència de Gestió d'Ajuts Universitaris i de Recerca 2017-SGR-568
Note: Other support: UAB transformative agreements
Rights: This document is subject to a Creative Commons use licence. Total or partial reproduction, distribution, public communication of the work, and the creation of derivative works are permitted, even for commercial purposes, provided the authorship of the original work is acknowledged. Creative Commons
Language: English
Document: Article ; research ; Published version
Subject: Explainability in machine learning ; Justifying reasons ; Decision-making ; Justificatory explanations ; Health
Published in: AI & society, March 2022, p. 1-15, ISSN 1435-5655

DOI: 10.1007/s00146-022-01389-z
PMID: 35370366


15 p, 715.2 KB

The record appears in the collections:
Articles > Research articles
Articles > Published articles

Record created 2022-04-26, last modified 2024-03-07


