| Literature DB >> 29476392 |
Steve G Langer, George Shih, Paul Nagy, Bennet A Landman.
Abstract
Combining imaging biomarkers with genomic and clinical phenotype data is the foundation of precision medicine research efforts. Yet, biomedical imaging research requires unique infrastructure compared with principally text-driven clinical electronic medical record (EMR) data. The issues are related to the binary nature of the file format and transport mechanism for medical images as well as the post-processing image segmentation and registration needed to combine anatomical and physiological imaging data sources. The SiiM Machine Learning Committee was formed to analyze the gaps and challenges surrounding research into machine learning in medical imaging and to find ways to mitigate these issues. At the 2017 annual meeting, a whiteboard session was held to rank the most pressing issues and develop strategies to meet them. The results, and further reflections, are summarized in this paper.Entities:
Keywords: Computer analytics; Computers in medicine; Machine learning
Year: 2018 PMID: 29476392 PMCID: PMC5959829 DOI: 10.1007/s10278-017-0043-x
Source DB: PubMed Journal: J Digit Imaging ISSN: 0897-1889 Impact factor: 4.056
Fig. 1 Conceptual pyramid of the objectives of the SIIM Machine Learning Committee (MLC). The committee's GitHub site has turnkey solutions (Dockers) that enable newcomers to the ML field to get started without having to master multiple technology dependencies first (levels 1–2). Then, as a person grows their knowledge, additional resources guide them in expanding those base Dockers to address new problems (level 3). At the upper levels, the MLC aims to foster guidelines for conducting reproducible science, provide shared datasets, and foster collaborative research
Fig. 2 Functional requirements for a cloud-based platform for conducting reproducible ML research. a Users upload de-identified data; the Orchestrator checks the submission in, holds it for curation, and crowdsources meta-tagging. b Another investigator uploads their model to try it out on the runtime platform on the existing datasets. The Orchestrator checks the code in and assigns access rights as per the author's wishes. c The investigator locates data relevant to their project, submits it to their algorithm, and stores results back to the system. For greater detail, see the text
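The check-in, curation, and execution flow described in the Fig. 2 caption can be sketched as a minimal in-memory model. This is an illustrative Python sketch only, not the platform's actual API: the `Orchestrator`, `Dataset`, and `Model` names and all method signatures are hypothetical, invented here to mirror steps a–c of the caption.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Dataset:
    name: str
    records: List[dict]
    curated: bool = False                           # held for curation on check-in (step a)
    tags: List[str] = field(default_factory=list)   # crowdsourced meta-tags

@dataclass
class Model:
    name: str
    fn: Callable[[List[dict]], object]
    public: bool                                    # access rights per the author's wishes (step b)

class Orchestrator:
    """Hypothetical stand-in for the platform's orchestration service."""

    def __init__(self) -> None:
        self.datasets: Dict[str, Dataset] = {}
        self.models: Dict[str, Model] = {}
        self.results: Dict[str, object] = {}

    def submit_dataset(self, ds: Dataset) -> None:
        # Step a: check the de-identified submission in; it is held until curated.
        self.datasets[ds.name] = ds

    def curate(self, name: str, tags: List[str]) -> None:
        # Crowdsourced meta-tagging releases the dataset for use.
        ds = self.datasets[name]
        ds.tags.extend(tags)
        ds.curated = True

    def submit_model(self, model: Model) -> None:
        # Step b: check the code in with the author's chosen access rights.
        self.models[model.name] = model

    def find_data(self, tag: str) -> List[Dataset]:
        # Step c: locate curated datasets relevant to a project by tag.
        return [d for d in self.datasets.values() if d.curated and tag in d.tags]

    def run(self, model_name: str, dataset_name: str) -> object:
        # Step c: submit the data to the algorithm and store results back.
        result = self.models[model_name].fn(self.datasets[dataset_name].records)
        self.results[f"{model_name}:{dataset_name}"] = result
        return result

# Usage: a toy "model" that counts CT records in a curated dataset.
orch = Orchestrator()
orch.submit_dataset(Dataset("site_a", [{"modality": "CT"}, {"modality": "MR"}]))
orch.curate("site_a", ["chest", "adult"])
orch.submit_model(Model("ct_counter",
                        lambda recs: sum(r["modality"] == "CT" for r in recs),
                        public=True))
print(orch.run("ct_counter", "site_a"))  # → 1
```

The key design point the caption implies is that datasets and models are checked in through a single gatekeeper, so curation state and access rights can be enforced before any algorithm touches the data.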