| Literature DB >> 32652926 |
Jane Hung, Allen Goodman, Deepali Ravel, Stefanie C P Lopes, Gabriel W Rangel, Odailton A Nery, Benoit Malleret, Francois Nosten, Marcus V G Lacerda, Marcelo U Ferreira, Laurent Rénia, Manoj T Duraisingh, Fabio T M Costa, Matthias Marti, Anne E Carpenter.
Abstract
BACKGROUND: A common yet still manual task in basic biology research, high-throughput drug screening, and digital pathology is identifying the number, location, and type of individual cells in images. Object detection methods are useful because they identify individual cells and their phenotypes in one step. State-of-the-art deep learning for object detection is poised to improve the accuracy and efficiency of biological image analysis.
Keywords: Convolutional networks; Deep learning; Keras; Malaria; Object detection
Year: 2020 PMID: 32652926 PMCID: PMC7353739 DOI: 10.1186/s12859-020-03635-x
Source DB: PubMed Journal: BMC Bioinformatics ISSN: 1471-2105 Impact factor: 3.169
Fig. 1 Overview of a traditional segmentation-based pipeline and a deep learning-based object detection pipeline. a. Traditional segmentation-based pipelines require the selection and tuning of multiple classical image processing algorithms to produce a segmentation, where pixels associated with individual instances (e.g. nuclei or cells) receive unique “labels”, represented here as different colors. b. Deep learning-based object detection pipelines require some annotated example images to be provided, and use neural networks to learn a model that produces bounding boxes around each object; the boxes may overlap. If multiple object classes are of interest (for example, multiple phenotypes), each bounding box is assigned a class. c. Code to train an object detection model, written using Keras R-CNN’s API. d. Graphs of predicted cell counts of each infected type over time on the time course images. The time course set contains samples prepared at particular hours between 0 and 44 h; it was designed to synchronize the parasites’ growth and to represent all stages. The ground truth is based on Annotator 1, who annotated all images in the dataset, including the training data.
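To illustrate the kind of classical labeling step panel (a) describes, here is a minimal sketch of a threshold-plus-connected-components pipeline using NumPy and SciPy on a synthetic image. This is a generic example of the traditional approach, not the authors' actual pipeline or the Keras R-CNN code shown in panel (c).

```python
import numpy as np
from scipy import ndimage

# Synthetic grayscale "image": two bright blobs on a dark background,
# standing in for stained cells on a microscopy field.
image = np.zeros((8, 8))
image[1:3, 1:3] = 1.0   # first object
image[5:7, 4:7] = 1.0   # second object

# Classical pipeline: a hand-tuned global threshold produces a binary mask,
# then connected-component labeling assigns each object's pixels a unique
# integer "label" (the different colors in panel a).
mask = image > 0.5
labels, n_objects = ndimage.label(mask)

print(n_objects)  # number of segmented instances
```

In a real pipeline each step (illumination correction, thresholding method, watershed splitting of touching cells) must be chosen and tuned per dataset, which is the manual burden the object detection approach in panel (b) is meant to replace.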