From Black Box to Glass Box: Advancing Transparency in Artificial Intelligence Systems for Ethical and Trustworthy AI

Franzoni, Valentina
2023

Abstract

The rapid development of Artificial Intelligence (AI) systems has raised significant ethical concerns, particularly regarding the lack of transparency in their decision-making processes. As AI systems become increasingly integrated into various aspects of society, there is an urgent need to transform these ‘black-box’ models into more transparent and understandable ‘glass-box’ systems. This paper explores the methods, challenges, and implications associated with enhancing transparency in AI systems to promote ethical and trustworthy AI. We examine the significance of transparency for stakeholders (i.e., users, developers, and policymakers) and the trade-offs between transparency and other objectives, such as accuracy and privacy. Recent US legislation prohibiting copyright claims on neural network-generated documents is used to illustrate the issues raised by the opaque nature of black-box AI models. A thorough literature review investigates current approaches and tools for AI transparency and identifies gaps and areas for future research. By moving from black-box to glass-box AI systems, we can ensure that AI technologies are not only powerful but also ethically sound and aligned with human values. This study contributes to the ongoing debate on AI ethics and prepares the way for future research into the complex landscape of transparency, trust, and decision-making in AI systems. © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
ISBN: 978-3-031-37113-4; 978-3-031-37114-1
Files in this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11391/1556973
Citations
  • Scopus: 2