| Literature DB >> 33712588 |
James A Diao1,2, Jason K Wang1,2, Wan Fung Chui1,2, Andrew H Beck3, Hunter L Elliott1, Amaro Taylor-Weiner4, Victoria Mountain1, Sai Chowdary Gullapally1, Ramprakash Srinivasan1, Richard N Mitchell2,5, Benjamin Glass1, Sara Hoffman1, Sudha K Rao1, Chirag Maheshwari1, Abhik Lahiri1, Aaditya Prakash1, Ryan McLoughlin1, Jennifer K Kerner1, Murray B Resnick1,6, Michael C Montalto1, Aditya Khosla1, Ilan N Wapinski1.
Abstract
Computational methods have made substantial progress in improving the accuracy and throughput of pathology workflows for diagnostic, prognostic, and genomic prediction. Still, lack of interpretability remains a significant barrier to clinical integration. We present an approach for predicting clinically relevant molecular phenotypes from whole-slide histopathology images using human-interpretable image features (HIFs). Our method leverages >1.6 million annotations from board-certified pathologists across >5700 samples to train deep learning models for cell and tissue classification that can exhaustively map whole-slide images at two- and four-micron resolution. Cell- and tissue-type model outputs are combined into 607 HIFs that quantify specific and biologically relevant characteristics across five cancer types. We demonstrate that these HIFs correlate with well-known markers of the tumor microenvironment and can predict diverse molecular signatures (AUROC 0.601-0.864), including expression of four immune checkpoint proteins and homologous recombination deficiency, with performance comparable to 'black-box' methods. Our HIF-based approach provides a comprehensive, quantitative, and interpretable window into the composition and spatial architecture of the tumor microenvironment.
Year: 2021 PMID: 33712588 PMCID: PMC7955068 DOI: 10.1038/s41467-021-21896-9
Source DB: PubMed Journal: Nat Commun ISSN: 2041-1723 Impact factor: 14.919