On the Opacity of Deep Neural Networks

Publication: Contribution to journal › Journal article › Research › peer-reviewed

Documents

  • Fulltext

    Publisher's published version, 166 KB, PDF document

Deep neural networks are said to be opaque, impeding the development of safe and trustworthy artificial intelligence, but where this opacity stems from is less clear. What are the sufficient properties for neural network opacity? Here, I discuss five common properties of deep neural networks and two different kinds of opacity. Which of these properties are sufficient for what type of opacity? I show how each kind of opacity stems from only one of these five properties, and then discuss to what extent the two kinds of opacity can be mitigated by explainability methods.
Original language: English
Journal: Canadian Journal of Philosophy
Volume: 53
Issue number: 3
Pages (from-to): 224–239
ISSN: 0045-5091
DOI
Status: Published - 2023

Bibliographical note

Publisher Copyright:
© The Author(s), 2024. Published by Cambridge University Press on behalf of The Canadian Journal of Philosophy Inc.
