Effy Vayena, Alessandro Blasimme, I. Glenn Cohen.
Abstract
Effy Vayena and colleagues argue that machine learning in medicine must offer data protection, algorithmic transparency, and accountability to earn the trust of patients and clinicians.
Year: 2018 PMID: 30399149 PMCID: PMC6219763 DOI: 10.1371/journal.pmed.1002689
Source DB: PubMed Journal: PLoS Med ISSN: 1549-1277 Impact factor: 11.069
Fig 1. Imagine a medical software company developing a machine learning–based device.
The device performs fully automated analysis of histopathology slides from cancer patients and predicts genetic mutations in tumors solely from these images. The inferred genetic information can be used either for prognostic purposes or to detect an indication for a targeted therapy. Users will not know which image features the algorithm associates with mutated genes, nor the biological explanation for these associations. The device's selling propositions are that it can infer valuable genetic information early in the diagnostic process and that, by analyzing images shared by pathologists on a cloud-based platform, it can be used in contexts where genetic testing is unavailable. MLm, machine learning in medicine.