Keshav Shree Mudgal, Neelanjan Das.
Abstract
Artificial intelligence (AI) is rapidly transforming healthcare, with radiology at the pioneering forefront. To be adopted with trust, AI needs to be lawful, ethical and robust. This article covers the different aspects of a safe and sustainable deployment of AI in radiology during training, integration and regulation. For training, data must be appropriately valued, and deals with AI companies must be centralized. Companies must clearly define anonymization and consent, and patients must be well informed about how their data are used. Data fed into algorithms must be made AI-ready through refinement, purification, digitization and centralization. Finally, data must represent various demographics. AI needs to be safely integrated with radiologists in the loop: guiding the formulation of AI solutions and supervising training and feedback. To be well regulated, AI systems must be approved by a health authority, and agreements must be made on liability for errors, the roles of supervised and unsupervised AI, and fair workforce distribution between AI and radiologists, with policy renewed at regular intervals. Any errors made must undergo a root-cause analysis, with outcomes fed back to companies to close the loop, thus enabling a dynamic best-prediction system. In the distant future, AI may act autonomously with little human supervision. Ethical training and integration can ensure a "transparent" technology that allows insight: helping us reflect on our current understanding of imaging interpretation and fill knowledge gaps, eventually moulding radiological practice. This article proposes recommendations for ethical practice that can guide a nationalized framework to build a sustainable and transparent system.
Year: 2020 PMID: 33178959 PMCID: PMC7605209 DOI: 10.1259/bjro.20190020
Source DB: PubMed Journal: BJR Open ISSN: 2513-9878
Figure 1. Deep convolutional neural networks use simple processing "neurons" that connect in layers, with signals from neurons merging into a convolution kernel at the next layer. Each layer weighs information from kernels and computes image features that are believed to be of importance in making the prediction or diagnosis of interest. Signals are transmitted between layers and the algorithm identifies the best combination of these image features for classifying the image and produces its output.[1,2] Furthermore, a process of "back-propagation" makes minute alterations to individual neurons so the network learns to produce the correct output. Once the system has learnt from multiple images, it becomes expert at recognizing a likely outcome such as a "pneumothorax". Adapted from: 'New Theory Cracks Open the Black Box of Deep Neural Networks'. Wired (10 August 2017): https://www.wired.com/story/new-theory-deep-learning/ [accessed 15/10/2018]
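The convolution-and-back-propagation mechanism described in Figure 1 can be sketched in a few lines of NumPy. This is a minimal illustrative toy, not the article's method: the image size, kernel size, learning rate and target feature map below are all hypothetical choices.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation, the operation deep-learning
    libraries conventionally call 'convolution'."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(0)
image = rng.random((5, 5))              # toy stand-in for a radiograph
true_kernel = np.full((3, 3), 0.2)      # the feature detector we hope to learn
target = conv2d(image, true_kernel)     # the "correct output" used for training
kernel = np.zeros((3, 3))               # untrained kernel weights

def mse(pred):
    return np.mean((pred - target) ** 2)

initial_loss = mse(conv2d(image, kernel))
for _ in range(500):
    pred = conv2d(image, kernel)
    grad_out = 2.0 * (pred - target) / pred.size  # dLoss/dOutput for MSE
    # Back-propagation: the gradient for each kernel weight is itself a
    # convolution of the input with the output-error signal.
    grad_kernel = conv2d(image, grad_out)
    kernel -= 0.1 * grad_kernel                   # minute corrective step
final_loss = mse(conv2d(image, kernel))
```

After repeated small corrections the loss shrinks and the kernel approaches the target feature detector, which is the sense in which the network "learns to produce the correct output."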
Figure 2. A comparison of diagnostic techniques used in recent AI studies. Acquired from "Artificial intelligence in healthcare: past, present and future" by Jiang et al. Permission for re-use has been granted.
Figure 3. Deep Dream AI-generated images of MRI heads. MRI head images were fed into the AI algorithm, which produced outputs based on its own black-box interpretation of the images. Developed using "Deep Dream Generator", https://deepdreamgenerator.com/ [accessed 25/10/2018]