| Literature DB >> 34901048 |
Guglielmo Trovato, Matteo Russo.
Abstract
Entities:
Keywords: artificial intelligence; bullying in international settings; deep learning-artificial neural network (DL-ANN); ethics; lung imaging
Year: 2021 PMID: 34901048 PMCID: PMC8655241 DOI: 10.3389/fmed.2021.706794
Source DB: PubMed Journal: Front Med (Lausanne) ISSN: 2296-858X
Requirements for a suitable use of deep learning in lung imaging: what the lung ultrasound still lacks. Key messages.
- Because deep neural network (DNN) models are heavily over-parameterized, they are usually treated as black boxes and their predictions go unquestioned. In medicine especially, where wrong predictions may have disastrous consequences, interpretability of results is of the utmost importance. The model output should therefore include not only the bare prediction but also which features and qualities of the input data drove that prediction, so that a medical operator can always interpret, explain, and challenge a DNN's predictions.
- For very high-dimensional inputs and heavily over-parameterized network architectures, explainability in the form of interpretability may be computationally hard to achieve and thus impractical. Nonetheless, statistical frameworks already exist that can run alongside the DNN and provide guarantees on the output distribution.
- When deep learning systems are sold to hospitals, they must come with a certification of the output predictions, both to ensure statistical consistency and to serve as a basis for legal compliance.
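The first message asks that a model surface which input features drove a prediction. One common way to do this is gradient-based saliency; as a minimal sketch (the record names no specific attribution method, and the two-class linear model below is purely hypothetical), the gradient of the predicted-class probability with respect to the input ranks feature influence. The same idea extends to deep networks via backpropagation.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def saliency(W, x):
    """Gradient of the predicted-class probability w.r.t. the input x."""
    p = softmax(W @ x)
    k = int(np.argmax(p))
    # d p_k / d x = p_k * (W_k - sum_j p_j * W_j)
    grad = p[k] * (W[k] - p @ W)
    return np.abs(grad)  # magnitude = per-feature influence on the prediction

# Hypothetical weights of a tiny 2-class, 3-feature classifier
W = np.array([[2.0, 0.0, -1.0],
              [0.0, 1.5,  0.5]])
x = np.array([1.0, 0.2, 0.1])
print(saliency(W, x))  # larger values mark more influential input features
```

A medical operator could inspect this per-feature map alongside the prediction itself, which is the kind of challengeable output the key message calls for.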