Axel Petzold1,2, Philipp Albrecht3, Laura Balcer4, Erik Bekkers5, Alexander U Brandt6, Peter A Calabresi7, Orla Galvin8, Jennifer S Graves9, Ari Green10, Pearse A Keane1, Jenny A Nij Bijvank2, Josemir W Sander11,12,13, Friedemann Paul14, Shiv Saidha7, Pablo Villoslada15, Siegfried K Wagner1, E Ann Yeh16.
Abstract
Artificial intelligence (AI)-based diagnostic algorithms have achieved ambitious aims through automated image pattern recognition. For neurological disorders, this includes neurodegeneration and inflammation. A scalable imaging technology for big data in neurology is optical coherence tomography (OCT). We highlight that OCT changes observed in the retina, as a window to the brain, are small, requiring rigorous quality control (QC) pipelines. There are existing tools for this purpose. First, there are human-led, validated consensus QC criteria for OCT (OSCAR-IB). Second, these criteria are embedded in OCT reporting guidelines (APOSTEL). The described annotation of failed OCT scans advances machine learning, as illustrated by the present review of the advantages and disadvantages of AI-based applications to OCT data. The neurological conditions reviewed here for the use of big data include Alzheimer disease, stroke, multiple sclerosis (MS), Parkinson disease, and epilepsy. We note that while big data is relevant for AI, data ownership is complex. For this reason, we also involved representatives from patient organizations and the public domain in addition to clinical and research centers. The evidence reviewed can be grouped into a five-point expansion of the OSCAR-IB criteria to embrace AI (OSCAR-AI). The review concludes with specific recommendations on how this can be achieved practically and in compliance with existing guidelines.
Year: 2021 PMID: 34008926 PMCID: PMC8283174 DOI: 10.1002/acn3.51320
Source DB: PubMed Journal: Ann Clin Transl Neurol ISSN: 2328-9503 Impact factor: 4.511
Expertise of the literature review committees.
| Expertise | Members |
|---|---|
| Patient voice | Nils Wiegerink (patient), Russell Wheeler (patient advocate), Christiaan Waters (president of a patient organization), Avril Daly (Retina International, ERN‐EYE), Christina Fasser (Retina International, ERN‐EYE), Orla Galvin (Retina International, ERN‐EYE), and Deborah Oshakuade (Retina International, ERN‐EYE) |
| AI | Erik Bekkers, Siegfried Wagner, and Pearse Keane |
| Public relations & media | Avril Daly |
| ALS | Philipp Albrecht and Orhan Aktas |
| Alzheimer disease | Thomas Wisniewski |
| Epilepsy | Josemir W. Sander |
| Parkinson disease | Alexander Brandt, Philipp Albrecht, and Orhan Aktas |
| Stroke | Shadi Yaghi and Arvind Chandratheva |
| Multiple Sclerosis | Alexander Brandt, Peter Calabresi, Laura Balcer, Elliot & Teresa Frohman, Friedemann Paul, Ari Green, Pablo Villoslada, Axel Petzold, Philipp Albrecht, Orhan Aktas, E. Ann Yeh, Bernardo Sanchez‐Dalmau, Jen Graves, Shiv Saidha, Robert Bermel, IMSVISUAL, and ERN‐EYE |
| Rare Diseases | Alexander Brandt, Philipp Albrecht, Orhan Aktas, Axel Petzold, Friedemann Paul, Frederike Oertel, E. Ann Yeh, Avril Daly (Retina International, ERN‐EYE), Christina Fasser (Retina International, ERN‐EYE), Orla Galvin (Retina International, ERN‐EYE), Deborah Oshakuade (Retina International, ERN‐EYE), Bernardo Sanchez‐Dalmau, and ERN‐EYE |
| Ophthalmology | Bernardo Sanchez‐Dalmau, Pearse Keane, Siegfried Wagner and ERN‐EYE |
| Neuro‐ophthalmology | Fiona Costello, Ari Green, Axel Petzold, Laura Balcer, Bernardo Sanchez‐Dalmau, Jen Graves, and ERN‐EYE |
| OCT | Alexander Brandt, Frederike Oertel, Hanna Zimmermann, Philipp Albrecht, Orhan Aktas, Peter Calabresi, Axel Petzold, Jen Graves, Rachel Nolan‐Kennedy, Laura Balcer, Shiv Saidha, Bernardo Sanchez‐Dalmau, Pablo Villoslada, and Robert Bermel |
| OCTA | Benjamin Knier, Shiv Saidha, Axel Petzold and IMSVISUAL |
| Clinical trials OCT QC | Alexander Brandt, Friedemann Paul, Sven Schippling, Axel Petzold, Robert Bermel, Laura Balcer, and IMSVISUAL |
| Statistics and epidemiology | David Crabb, Gary Cutter, Laura Balcer, Jen Graves, Rachel Nolan‐Kennedy, Kathryn Fitzgerald, and Zhaoxia Yu |
Terminology and basic concepts.
| Term | Definition |
|---|---|
| Artificial Intelligence (AI) | Computer or machine‐based intelligence which enables "learning" and "problem solving" |
| Machine learning (ML) | One subset of AI. Typically algorithms improve automatically through experience after training on a dataset. ML can be supervised or unsupervised |
| Deep learning | One subset of ML, essentially based on artificial neural networks. Very efficient and the basis of most contemporary AI‐based studies on image recognition |
| Supervised | Supervised ML works on a labeled training dataset (e.g., OSCAR‐IB OCT scans) and reproduces the desired outcome |
| Unsupervised | Unsupervised ML tries to discover previously undetected patterns in a dataset |
| Over‐fitting | Over‐fitting can be a problem with ML; it is a source of over‐enthusiastic reporting and a reason for lack of reproducibility |
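The over-fitting problem defined above can be illustrated with a minimal sketch: a "model" that memorizes its training set scores perfectly there but fails on unseen data, while a simple learned rule generalizes. The "pass"/"fail" QC labels and feature values below are hypothetical, not drawn from the review.

```python
# Hypothetical scans as (feature_1, artifact_score) -> QC label.
train = {(1, 1): "pass", (2, 1): "pass", (3, 9): "fail", (4, 9): "fail"}
test = {(1, 2): "pass", (5, 8): "fail"}

def memorizer(x):
    # Over-fitted "model": exact lookup of training examples,
    # with an arbitrary fallback for anything unseen.
    return train.get(x, "pass")

def rule(x):
    # Simple rule learned from the training data: a high
    # artifact score means the scan fails QC.
    return "fail" if x[1] > 5 else "pass"

def accuracy(model, data):
    return sum(model(x) == y for x, y in data.items()) / len(data)

print(accuracy(memorizer, train), accuracy(memorizer, test))  # 1.0 0.5
print(accuracy(rule, train), accuracy(rule, test))            # 1.0 1.0
```

The memorizer's perfect training accuracy is exactly the "over-enthusiastic reporting" risk: only held-out test performance reveals the lack of generalization.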
FIGURE 1 The goal of quality control in artificial intelligence (AI) rests on five pillars (RASCO): (1) Reproducibility, (2) Accountability for decisions made, (3) being Supportive of the patient–physician relationship, (4) Capability, ranging from machine learning (ML)‐supported OCT quality control assessment to time‐ and resource‐efficient decision‐making, and (5) Openness with, and trust in, public opinion.
FIGURE 2 The capability of AI to contribute to interpreting OCT images depends on the optimization of each step in the decision tree. The first step relates to the quality of the raw data. Validated QC criteria for OCT images have been summarized as OSCAR‐IB. The ground truth of whether or not an OCT scan passes QC is based on human assessment. The seven OSCAR‐IB criteria for QC rejection by a human assessor can be used directly to train AI. Annotation of corrupted OCT scans permits two outcomes: (1) image postprocessing and repair of artifacts, or (2) complete rejection and (if feasible) recall of the patient for an OCT rescan. Only a dataset that has passed OCT image QC should be used for further AI interpretation.
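The gating logic of the Figure 2 decision tree can be sketched as follows. The function names, the abbreviated criterion labels, and the "repairable" flag are hypothetical placeholders for illustration, not an implementation from the paper.

```python
OSCAR_IB = ["O", "S", "C", "A", "R", "I", "B"]  # the seven QC criteria

def failed_criteria(scan):
    # Placeholder for human- or ML-based QC assessment: returns the
    # OSCAR-IB criteria the scan violates (empty list = QC pass).
    return [c for c in OSCAR_IB if not scan.get(c, False)]

def triage(scans):
    passed, repair, reject = [], [], []
    for scan in scans:
        if not failed_criteria(scan):
            passed.append(scan)   # eligible for further AI interpretation
        elif scan.get("repairable"):
            repair.append(scan)   # image postprocessing / artifact repair
        else:
            reject.append(scan)   # rejection; rescan patient if feasible
    return passed, repair, reject

good = {c: True for c in OSCAR_IB}
bad = {"O": True, "repairable": True}
passed, repair, reject = triage([good, bad])
```

The key design point is that only the `passed` list ever reaches the AI interpretation stage; annotated failures feed the repair or rescan branches instead of being silently discarded.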
Summary of key points from the literature review on OCT and AI research in neurology. The categories are based on the mnemonic "RASCO". This table may help guide future use of the reported data for AI‐based studies.
| Question | Answer |
|---|---|
| **Reproducibility** | |
| OSCAR‐IB OCT quality control compliant? | Yes / No |
| APOSTEL OCT reporting guideline compliant? | Yes / No |
| TRIPOD‐AI compliant? | Yes / No |
| CONSORT‐AI compliant? | Yes / No |
| SPIRIT‐AI compliant? | Yes / No |
| STROBE compliant? | Yes / No |
| **Accountability** | |
| Training, test & validation sets explained? | Yes / No |
| Potential for bias addressed? | Yes / No |
| Ground truth explicitly stated? | Yes / No |
| Statement on proportional bias given? | Yes / No |
| Precision‐recall curves provided? | Yes / No |
| Power calculations included? | Yes / No |
| **Supportive** | |
| Patient voice included? | Yes / No |
| Conflicts of interest, including political, explained? | Yes / No |
| Shows how AI is used to enhance human performance? | Yes / No |
| Tested in clinical practice? | Yes / No |
| **Capability** | |
| Unsupervised AI? | Yes / No |
| Has QC capabilities? | Yes / No |
| Provides a glimpse into the black box? | Yes / No |
| Vulnerabilities of AI explained? | Yes / No |
| External Validation? | Yes / No |
| **Openness** | |
| Data availability statement? | Yes / No |
| Data deposited in repository? | Yes / No |
| AI deposited in open access code repository? | Yes / No |
Sources of bias can be analytical, clinical, or statistical, or can arise from imbalance in the populations or centres where the original research was conducted.
See Table 2
See Figure 2
See Figure 3
Vulnerabilities include artifacts, use of different devices, and hardware or software updates of the OCT device.