| Literature DB >> 32432952 |
Stephan Steigele, Daniel Siegismund, Matthias Fassler, Marusa Kustec, Bernd Kappler, Tom Hasaka, Ada Yee, Annette Brodte, Stephan Heyse.
Abstract
Drug discovery programs are moving increasingly toward phenotypic imaging assays to model disease-relevant pathways and phenotypes in vitro. These assays offer richer information than target-optimized assays by investigating multiple cellular pathways simultaneously and producing multiplexed readouts. However, extracting the desired information from complex image data poses significant challenges, preventing broad adoption of more sophisticated phenotypic assays. Deep learning-based image analysis can address these challenges by reducing the effort required to analyze large volumes of complex image data at a quality and speed adequate for routine phenotypic screening in pharmaceutical research. However, while general-purpose deep learning frameworks are readily available, they are not readily applicable to images from automated microscopy. During the past 3 years, we have optimized deep learning networks for this type of data and validated the approach across diverse assays with several industry partners. From this work, we have extracted five essential design principles that we believe should guide deep learning-based analysis of high-content images and multiparameter data: (1) insightful data representation, (2) automation of training, (3) multilevel quality control, (4) knowledge embedding and transfer to new assays, and (5) enterprise integration. We report a new deep learning-based software that embodies these principles, Genedata Imagence, which allows screening scientists to reliably detect stable endpoints for primary drug response, assess toxicity and safety-relevant effects, and discover new phenotypes and compound classes. Furthermore, we show how the software retains expert knowledge from its training on a particular assay and successfully reapplies it to different, novel assays in an automated fashion.
Keywords: cell-based assays; high-content screening; image analysis; imaging technologies; phenotypic drug discovery
MeSH:
Year: 2020 PMID: 32432952 PMCID: PMC7372584 DOI: 10.1177/2472555220918837
Source DB: PubMed Journal: SLAS Discov ISSN: 2472-5552 Impact factor: 3.341
Figure 1. Comparison of classical HCS analysis workflow versus deep learning-based HCS workflow. In a classical HCS analysis workflow (top), establishing the analysis procedure is labor- and time-intensive. The work is usually split between distinct roles and people (assay biologists, yellow, and image analysis experts, blue) and involves several handovers. This workflow requires tight coordination and quality control to guarantee robust assay outcomes. In a deep learning-based HCS workflow (bottom), the same results can be generated by a single scientist in a fraction of the time. The scientist is responsible for training data generation and curation using the HCS images as reference, which is the only hands-on step in an otherwise automated workflow.
Figure 3. (A) Visual representation of phenotypic space. Training data displayed in the well similarity map. Each point represents a single well; wells containing similar phenotypes cluster closely together. Classes are assigned by manually drawing a gate (colored polygons) around closely clustered points, followed by labeling with the class name. Visual guidance is provided by color-coding of wells by their metadata for appropriate labeling and subset selection for training. In this figure, coloring represents neutral control versus compound wells. Sample images of wells from each class, which have visually distinct phenotypes, are shown. (B) Cell-level plots of selected wells (red highlighting in A). Contour and color-coded density plots enable a clearer interpretation of the maps and definition of population gates based on densities. Each region contains between ~100 and 24,000 data points; as a visual aid, outlier cells are displayed as individual dots only where the local density falls below 5%. (C) A visual, side-by-side review of example images belonging to each class. Upon visual inspection, any image judged as not belonging to the assigned phenotype can be removed by the user.
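The density-based display rule described for the cell-level plots in Figure 3B can be sketched as follows. This is a minimal illustration, not the software's actual implementation: it assumes a 2D kernel density estimate over map coordinates and a threshold of 5% of the peak density, below which cells are rendered as individual dots (denser regions would instead be drawn as contours or color-coded density). All data and variable names here are hypothetical.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical cell-level map coordinates: one dense cluster of cells
# plus a few scattered cells far from it.
rng = np.random.default_rng(0)
cluster = rng.normal(loc=0.0, scale=0.5, size=(500, 2))
scatter = rng.uniform(low=-5.0, high=5.0, size=(20, 2))
points = np.vstack([cluster, scatter])

# Kernel density estimate, evaluated at each cell's own position.
kde = gaussian_kde(points.T)
density = kde(points.T)

# Display rule (assumed interpretation of the caption): draw individual
# dots only where the local density falls below 5% of the maximum.
outlier_mask = density < 0.05 * density.max()

print(f"{outlier_mask.sum()} of {len(points)} cells shown as dots")
```

In a plotting layer, `points[outlier_mask]` would be drawn as a scatter of dots while the remaining dense regions are summarized by contour or density shading, which is the visual behavior the caption describes.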